00:00:00.000 Started by upstream project "autotest-per-patch" build number 132537 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.075 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.076 The recommended git tool is: git 00:00:00.076 using credential 00000000-0000-0000-0000-000000000002 00:00:00.078 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.119 Fetching changes from the remote Git repository 00:00:00.122 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.188 Using shallow fetch with depth 1 00:00:00.188 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.188 > git --version # timeout=10 00:00:00.237 > git --version # 'git version 2.39.2' 00:00:00.237 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.277 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.277 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.572 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.586 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.598 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.598 > git config core.sparsecheckout # timeout=10 00:00:05.612 > git read-tree -mu HEAD # timeout=10 00:00:05.629 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.653 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.653 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.759 [Pipeline] Start of Pipeline 00:00:05.777 [Pipeline] library 00:00:05.779 Loading library shm_lib@master 00:00:05.779 Library shm_lib@master is cached. Copying from home. 00:00:05.797 [Pipeline] node 00:00:20.800 Still waiting to schedule task 00:00:20.801 Waiting for next available executor on ‘vagrant-vm-host’ 00:04:53.280 Running on VM-host-WFP7 in /var/jenkins/workspace/nvme-vg-autotest 00:04:53.281 [Pipeline] { 00:04:53.294 [Pipeline] catchError 00:04:53.296 [Pipeline] { 00:04:53.310 [Pipeline] wrap 00:04:53.322 [Pipeline] { 00:04:53.331 [Pipeline] stage 00:04:53.334 [Pipeline] { (Prologue) 00:04:53.353 [Pipeline] echo 00:04:53.355 Node: VM-host-WFP7 00:04:53.362 [Pipeline] cleanWs 00:04:53.371 [WS-CLEANUP] Deleting project workspace... 00:04:53.371 [WS-CLEANUP] Deferred wipeout is used... 
00:04:53.378 [WS-CLEANUP] done 00:04:53.601 [Pipeline] setCustomBuildProperty 00:04:53.714 [Pipeline] httpRequest 00:04:54.120 [Pipeline] echo 00:04:54.123 Sorcerer 10.211.164.101 is alive 00:04:54.135 [Pipeline] retry 00:04:54.138 [Pipeline] { 00:04:54.153 [Pipeline] httpRequest 00:04:54.158 HttpMethod: GET 00:04:54.159 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:04:54.159 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:04:54.160 Response Code: HTTP/1.1 200 OK 00:04:54.160 Success: Status code 200 is in the accepted range: 200,404 00:04:54.161 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:04:54.319 [Pipeline] } 00:04:54.355 [Pipeline] // retry 00:04:54.360 [Pipeline] sh 00:04:54.636 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:04:54.651 [Pipeline] httpRequest 00:04:55.052 [Pipeline] echo 00:04:55.054 Sorcerer 10.211.164.101 is alive 00:04:55.064 [Pipeline] retry 00:04:55.067 [Pipeline] { 00:04:55.081 [Pipeline] httpRequest 00:04:55.085 HttpMethod: GET 00:04:55.086 URL: http://10.211.164.101/packages/spdk_e93f0f9410d277727d5ce5fb7616a2608baa9462.tar.gz 00:04:55.086 Sending request to url: http://10.211.164.101/packages/spdk_e93f0f9410d277727d5ce5fb7616a2608baa9462.tar.gz 00:04:55.087 Response Code: HTTP/1.1 200 OK 00:04:55.088 Success: Status code 200 is in the accepted range: 200,404 00:04:55.088 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_e93f0f9410d277727d5ce5fb7616a2608baa9462.tar.gz 00:04:57.524 [Pipeline] } 00:04:57.541 [Pipeline] // retry 00:04:57.548 [Pipeline] sh 00:04:57.841 + tar --no-same-owner -xf spdk_e93f0f9410d277727d5ce5fb7616a2608baa9462.tar.gz 00:05:00.443 [Pipeline] sh 00:05:00.724 + git -C spdk log --oneline -n5 00:05:00.724 e93f0f941 bdev/malloc: Support accel sequence when DIF is enabled 00:05:00.724 27c6508ea bdev: Add spdk_bdev_io_hide_metadata() for bdev modules 00:05:00.724 c86e5b182 bdev/malloc: Extract internal of verify_pi() for code reuse 00:05:00.724 97329b16b bdev/malloc: malloc_done() uses switch-case for clean up 00:05:00.724 afdec00e1 nvmf: Add hide_metadata option to nvmf_subsystem_add_ns 00:05:00.743 [Pipeline] writeFile 00:05:00.759 [Pipeline] sh 00:05:01.043 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:05:01.055 [Pipeline] sh 00:05:01.338 + cat autorun-spdk.conf 00:05:01.338 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:01.338 SPDK_TEST_NVME=1 00:05:01.338 SPDK_TEST_FTL=1 00:05:01.338 SPDK_TEST_ISAL=1 00:05:01.338 SPDK_RUN_ASAN=1 00:05:01.338 SPDK_RUN_UBSAN=1 00:05:01.338 SPDK_TEST_XNVME=1 00:05:01.338 SPDK_TEST_NVME_FDP=1 00:05:01.338 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:01.344 RUN_NIGHTLY=0 00:05:01.346 [Pipeline] } 00:05:01.358 [Pipeline] // stage 00:05:01.373 [Pipeline] stage 00:05:01.375 [Pipeline] { (Run VM) 00:05:01.388 [Pipeline] sh 00:05:01.672 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:05:01.672 + echo 'Start stage prepare_nvme.sh' 00:05:01.672 Start stage prepare_nvme.sh 00:05:01.672 + [[ -n 3 ]] 00:05:01.672 + disk_prefix=ex3 00:05:01.672 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:05:01.672 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:05:01.672 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:05:01.672 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:01.672 ++ SPDK_TEST_NVME=1 00:05:01.672 ++ SPDK_TEST_FTL=1 00:05:01.672 ++ 
SPDK_TEST_ISAL=1 00:05:01.672 ++ SPDK_RUN_ASAN=1 00:05:01.672 ++ SPDK_RUN_UBSAN=1 00:05:01.672 ++ SPDK_TEST_XNVME=1 00:05:01.672 ++ SPDK_TEST_NVME_FDP=1 00:05:01.672 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:01.672 ++ RUN_NIGHTLY=0 00:05:01.672 + cd /var/jenkins/workspace/nvme-vg-autotest 00:05:01.672 + nvme_files=() 00:05:01.672 + declare -A nvme_files 00:05:01.672 + backend_dir=/var/lib/libvirt/images/backends 00:05:01.672 + nvme_files['nvme.img']=5G 00:05:01.672 + nvme_files['nvme-cmb.img']=5G 00:05:01.672 + nvme_files['nvme-multi0.img']=4G 00:05:01.672 + nvme_files['nvme-multi1.img']=4G 00:05:01.672 + nvme_files['nvme-multi2.img']=4G 00:05:01.672 + nvme_files['nvme-openstack.img']=8G 00:05:01.672 + nvme_files['nvme-zns.img']=5G 00:05:01.672 + (( SPDK_TEST_NVME_PMR == 1 )) 00:05:01.672 + (( SPDK_TEST_FTL == 1 )) 00:05:01.672 + nvme_files["nvme-ftl.img"]=6G 00:05:01.672 + (( SPDK_TEST_NVME_FDP == 1 )) 00:05:01.672 + nvme_files["nvme-fdp.img"]=1G 00:05:01.672 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:05:01.672 + for nvme in "${!nvme_files[@]}" 00:05:01.672 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:05:01.672 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:05:01.672 + for nvme in "${!nvme_files[@]}" 00:05:01.672 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G 00:05:01.672 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:05:01.672 + for nvme in "${!nvme_files[@]}" 00:05:01.672 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:05:01.672 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:05:01.672 + for nvme in "${!nvme_files[@]}" 00:05:01.672 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:05:01.672 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:05:01.672 + for nvme in "${!nvme_files[@]}" 00:05:01.672 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:05:01.672 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:05:01.672 + for nvme in "${!nvme_files[@]}" 00:05:01.672 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:05:01.672 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:05:01.672 + for nvme in "${!nvme_files[@]}" 00:05:01.672 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:05:01.930 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:05:01.930 + for nvme in "${!nvme_files[@]}" 00:05:01.930 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G 00:05:01.930 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:05:01.930 + for nvme in "${!nvme_files[@]}" 00:05:01.930 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 
5G 00:05:02.496 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:02.496 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:05:02.496 + echo 'End stage prepare_nvme.sh' 00:05:02.496 End stage prepare_nvme.sh 00:05:02.509 [Pipeline] sh 00:05:02.792 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:02.792 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:05:02.792 00:05:02.792 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:05:02.792 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:05:02.792 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:05:02.792 HELP=0 00:05:02.792 DRY_RUN=0 00:05:02.792 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img, 00:05:02.792 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:05:02.792 NVME_AUTO_CREATE=0 00:05:02.792 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,, 00:05:02.792 NVME_CMB=,,,, 00:05:02.792 NVME_PMR=,,,, 00:05:02.792 NVME_ZNS=,,,, 00:05:02.792 NVME_MS=true,,,, 00:05:02.792 NVME_FDP=,,,on, 00:05:02.792 SPDK_VAGRANT_DISTRO=fedora39 00:05:02.792 SPDK_VAGRANT_VMCPU=10 00:05:02.792 SPDK_VAGRANT_VMRAM=12288 00:05:02.792 SPDK_VAGRANT_PROVIDER=libvirt 00:05:02.792 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:05:02.792 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:05:02.792 SPDK_OPENSTACK_NETWORK=0 00:05:02.792 VAGRANT_PACKAGE_BOX=0 00:05:02.792 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:05:02.792 FORCE_DISTRO=true 00:05:02.792 VAGRANT_BOX_VERSION= 00:05:02.792 EXTRA_VAGRANTFILES= 00:05:02.792 NIC_MODEL=virtio 00:05:02.792 00:05:02.792 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:05:02.792 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:05:06.075 Bringing machine 'default' up with 'libvirt' provider... 00:05:06.075 ==> default: Creating image (snapshot of base box volume). 00:05:06.333 ==> default: Creating domain with the following settings... 
00:05:06.333 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732644539_79067f5482b1a5631816 00:05:06.333 ==> default: -- Domain type: kvm 00:05:06.333 ==> default: -- Cpus: 10 00:05:06.333 ==> default: -- Feature: acpi 00:05:06.333 ==> default: -- Feature: apic 00:05:06.333 ==> default: -- Feature: pae 00:05:06.333 ==> default: -- Memory: 12288M 00:05:06.333 ==> default: -- Memory Backing: hugepages: 00:05:06.333 ==> default: -- Management MAC: 00:05:06.333 ==> default: -- Loader: 00:05:06.333 ==> default: -- Nvram: 00:05:06.333 ==> default: -- Base box: spdk/fedora39 00:05:06.333 ==> default: -- Storage pool: default 00:05:06.333 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732644539_79067f5482b1a5631816.img (20G) 00:05:06.333 ==> default: -- Volume Cache: default 00:05:06.333 ==> default: -- Kernel: 00:05:06.333 ==> default: -- Initrd: 00:05:06.333 ==> default: -- Graphics Type: vnc 00:05:06.333 ==> default: -- Graphics Port: -1 00:05:06.333 ==> default: -- Graphics IP: 127.0.0.1 00:05:06.333 ==> default: -- Graphics Password: Not defined 00:05:06.333 ==> default: -- Video Type: cirrus 00:05:06.333 ==> default: -- Video VRAM: 9216 00:05:06.333 ==> default: -- Sound Type: 00:05:06.333 ==> default: -- Keymap: en-us 00:05:06.333 ==> default: -- TPM Path: 00:05:06.333 ==> default: -- INPUT: type=mouse, bus=ps2 00:05:06.333 ==> default: -- Command line args: 00:05:06.333 ==> default: -> value=-device, 00:05:06.333 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:05:06.333 ==> default: -> value=-drive, 00:05:06.333 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:05:06.333 ==> default: -> value=-device, 00:05:06.333 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:05:06.333 ==> default: -> value=-device, 00:05:06.333 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:05:06.333 ==> default: -> value=-drive, 00:05:06.333 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0, 00:05:06.333 ==> default: -> value=-device, 00:05:06.333 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:06.333 ==> default: -> value=-device, 00:05:06.334 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:05:06.334 ==> default: -> value=-drive, 00:05:06.334 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:05:06.334 ==> default: -> value=-device, 00:05:06.334 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:06.334 ==> default: -> value=-drive, 00:05:06.334 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:05:06.334 ==> default: -> value=-device, 00:05:06.334 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:06.334 ==> default: -> value=-drive, 00:05:06.334 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:05:06.334 ==> default: -> value=-device, 00:05:06.334 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:06.334 ==> default: -> value=-device, 00:05:06.334 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:05:06.334 ==> default: -> value=-device, 00:05:06.334 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:05:06.334 ==> default: -> value=-drive, 00:05:06.334 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:05:06.334 ==> default: -> value=-device, 00:05:06.334 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:06.334 ==> default: Creating shared folders metadata... 00:05:06.334 ==> default: Starting domain. 00:05:07.713 ==> default: Waiting for domain to get an IP address... 00:05:25.793 ==> default: Waiting for SSH to become available... 00:05:27.174 ==> default: Configuring and enabling network interfaces... 00:05:33.759 default: SSH address: 192.168.121.21:22 00:05:33.759 default: SSH username: vagrant 00:05:33.759 default: SSH auth method: private key 00:05:36.292 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:05:44.406 ==> default: Mounting SSHFS shared folder... 00:05:46.306 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:05:46.306 ==> default: Checking Mount.. 00:05:47.721 ==> default: Folder Successfully Mounted! 00:05:47.721 ==> default: Running provisioner: file... 00:05:48.674 default: ~/.gitconfig => .gitconfig 00:05:49.242 00:05:49.242 SUCCESS! 00:05:49.242 00:05:49.242 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:05:49.242 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:05:49.242 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:05:49.242 00:05:49.250 [Pipeline] } 00:05:49.268 [Pipeline] // stage 00:05:49.278 [Pipeline] dir 00:05:49.279 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:05:49.281 [Pipeline] { 00:05:49.293 [Pipeline] catchError 00:05:49.294 [Pipeline] { 00:05:49.310 [Pipeline] sh 00:05:49.597 + vagrant ssh-config --host vagrant 00:05:49.597 + sed -ne '/^Host/,$p' 00:05:49.597 + tee ssh_conf 00:05:52.886 Host vagrant 00:05:52.886 HostName 192.168.121.21 00:05:52.886 User vagrant 00:05:52.886 Port 22 00:05:52.886 UserKnownHostsFile /dev/null 00:05:52.886 StrictHostKeyChecking no 00:05:52.886 PasswordAuthentication no 00:05:52.886 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:05:52.886 IdentitiesOnly yes 00:05:52.886 LogLevel FATAL 00:05:52.886 ForwardAgent yes 00:05:52.886 ForwardX11 yes 00:05:52.886 00:05:52.902 [Pipeline] withEnv 00:05:52.905 [Pipeline] { 00:05:52.921 [Pipeline] sh 00:05:53.208 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:05:53.209 source /etc/os-release 00:05:53.209 [[ -e /image.version ]] && img=$(< /image.version) 00:05:53.209 # Minimal, systemd-like check. 
00:05:53.209 if [[ -e /.dockerenv ]]; then 00:05:53.209 # Clear garbage from the node's name: 00:05:53.209 # agt-er_autotest_547-896 -> autotest_547-896 00:05:53.209 # $HOSTNAME is the actual container id 00:05:53.209 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:05:53.209 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:05:53.209 # We can assume this is a mount from a host where container is running, 00:05:53.209 # so fetch its hostname to easily identify the target swarm worker. 00:05:53.209 container="$(< /etc/hostname) ($agent)" 00:05:53.209 else 00:05:53.209 # Fallback 00:05:53.209 container=$agent 00:05:53.209 fi 00:05:53.209 fi 00:05:53.209 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:05:53.209 00:05:53.479 [Pipeline] } 00:05:53.495 [Pipeline] // withEnv 00:05:53.506 [Pipeline] setCustomBuildProperty 00:05:53.522 [Pipeline] stage 00:05:53.525 [Pipeline] { (Tests) 00:05:53.544 [Pipeline] sh 00:05:53.825 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:05:54.100 [Pipeline] sh 00:05:54.387 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:05:54.662 [Pipeline] timeout 00:05:54.662 Timeout set to expire in 50 min 00:05:54.664 [Pipeline] { 00:05:54.678 [Pipeline] sh 00:05:54.960 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:05:55.527 HEAD is now at e93f0f941 bdev/malloc: Support accel sequence when DIF is enabled 00:05:55.537 [Pipeline] sh 00:05:55.815 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:05:56.087 [Pipeline] sh 00:05:56.366 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:05:56.639 [Pipeline] sh 00:05:56.946 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:05:57.203 ++ readlink -f spdk_repo 00:05:57.203 + DIR_ROOT=/home/vagrant/spdk_repo 00:05:57.203 + [[ -n /home/vagrant/spdk_repo ]] 00:05:57.203 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:05:57.203 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:05:57.203 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:05:57.203 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:05:57.203 + [[ -d /home/vagrant/spdk_repo/output ]] 00:05:57.203 + [[ nvme-vg-autotest == pkgdep-* ]] 00:05:57.203 + cd /home/vagrant/spdk_repo 00:05:57.203 + source /etc/os-release 00:05:57.203 ++ NAME='Fedora Linux' 00:05:57.203 ++ VERSION='39 (Cloud Edition)' 00:05:57.203 ++ ID=fedora 00:05:57.203 ++ VERSION_ID=39 00:05:57.203 ++ VERSION_CODENAME= 00:05:57.203 ++ PLATFORM_ID=platform:f39 00:05:57.203 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:05:57.203 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:57.203 ++ LOGO=fedora-logo-icon 00:05:57.203 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:05:57.203 ++ HOME_URL=https://fedoraproject.org/ 00:05:57.203 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:05:57.203 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:57.203 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:57.203 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:57.203 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:05:57.203 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:57.203 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:05:57.203 ++ SUPPORT_END=2024-11-12 00:05:57.203 ++ VARIANT='Cloud Edition' 00:05:57.203 ++ VARIANT_ID=cloud 00:05:57.203 + uname -a 00:05:57.203 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:05:57.203 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:57.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.029 Hugepages 00:05:58.029 node hugesize free / total 00:05:58.029 node0 1048576kB 0 / 0 00:05:58.029 node0 2048kB 0 / 0 00:05:58.029 00:05:58.029 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:58.029 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:58.029 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:58.029 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:58.029 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:58.029 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:58.029 + rm -f /tmp/spdk-ld-path 00:05:58.029 + source autorun-spdk.conf 00:05:58.029 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:58.029 ++ SPDK_TEST_NVME=1 00:05:58.029 ++ SPDK_TEST_FTL=1 00:05:58.029 ++ SPDK_TEST_ISAL=1 00:05:58.029 ++ SPDK_RUN_ASAN=1 00:05:58.029 ++ SPDK_RUN_UBSAN=1 00:05:58.029 ++ SPDK_TEST_XNVME=1 00:05:58.029 ++ SPDK_TEST_NVME_FDP=1 00:05:58.029 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:58.029 ++ RUN_NIGHTLY=0 00:05:58.029 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:58.029 + [[ -n '' ]] 00:05:58.029 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:05:58.029 + for M in /var/spdk/build-*-manifest.txt 00:05:58.029 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:05:58.029 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:58.029 + for M in /var/spdk/build-*-manifest.txt 00:05:58.029 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:58.029 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:58.029 + for M in /var/spdk/build-*-manifest.txt 00:05:58.029 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:58.029 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:58.029 ++ uname 00:05:58.029 + [[ Linux == \L\i\n\u\x ]] 00:05:58.029 + sudo dmesg -T 00:05:58.290 + sudo dmesg --clear 00:05:58.290 + dmesg_pid=5454 00:05:58.290 
+ sudo dmesg -Tw 00:05:58.290 + [[ Fedora Linux == FreeBSD ]] 00:05:58.290 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:58.290 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:58.290 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:58.290 + [[ -x /usr/src/fio-static/fio ]] 00:05:58.290 + export FIO_BIN=/usr/src/fio-static/fio 00:05:58.290 + FIO_BIN=/usr/src/fio-static/fio 00:05:58.290 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:58.290 + [[ ! -v VFIO_QEMU_BIN ]] 00:05:58.290 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:58.290 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:58.290 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:58.290 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:58.290 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:58.290 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:58.290 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:58.290 18:09:51 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:05:58.290 18:09:51 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:58.290 18:09:51 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:58.290 18:09:51 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:05:58.290 18:09:51 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:05:58.290 18:09:51 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:05:58.290 18:09:51 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:05:58.290 18:09:51 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:05:58.290 18:09:51 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:05:58.290 18:09:51 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:05:58.290 18:09:51 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:58.290 18:09:51 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:05:58.290 18:09:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:05:58.290 18:09:51 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:58.550 18:09:51 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:05:58.550 18:09:51 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:58.550 18:09:51 -- scripts/common.sh@15 -- $ shopt -s extglob 00:05:58.550 18:09:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:58.550 18:09:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.550 18:09:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.550 18:09:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.550 18:09:51 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.550 18:09:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.550 18:09:51 -- paths/export.sh@5 -- $ export PATH 00:05:58.550 18:09:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.550 18:09:51 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:05:58.550 18:09:51 -- common/autobuild_common.sh@493 -- $ date +%s 00:05:58.550 18:09:51 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732644591.XXXXXX 00:05:58.550 18:09:51 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732644591.dgABT4 00:05:58.550 18:09:51 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:05:58.550 18:09:51 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:05:58.550 18:09:51 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:05:58.550 18:09:51 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:05:58.550 18:09:51 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:05:58.550 18:09:51 -- common/autobuild_common.sh@509 -- $ get_config_params 00:05:58.550 18:09:51 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:05:58.550 18:09:51 -- common/autotest_common.sh@10 -- $ set +x 00:05:58.550 18:09:51 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:05:58.550 18:09:51 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:05:58.550 18:09:51 -- pm/common@17 -- $ local monitor 00:05:58.550 18:09:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:58.550 18:09:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:58.550 18:09:51 -- pm/common@25 -- $ sleep 1 00:05:58.550 18:09:51 -- pm/common@21 -- $ date +%s 00:05:58.550 18:09:51 -- pm/common@21 -- $ date +%s 00:05:58.550 18:09:51 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732644591 00:05:58.550 18:09:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732644591 00:05:58.550 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732644591_collect-vmstat.pm.log 00:05:58.550 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732644591_collect-cpu-load.pm.log 00:05:59.487 18:09:52 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:05:59.487 18:09:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:59.487 18:09:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:59.487 18:09:52 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:59.487 18:09:52 -- spdk/autobuild.sh@16 -- $ date -u 00:05:59.487 Tue Nov 26 06:09:52 PM UTC 2024 00:05:59.487 18:09:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:59.487 v25.01-pre-266-ge93f0f941 00:05:59.487 18:09:52 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:05:59.487 18:09:52 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:05:59.487 18:09:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:59.487 18:09:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:59.487 18:09:52 -- common/autotest_common.sh@10 -- $ set +x 00:05:59.487 ************************************ 00:05:59.487 START TEST asan 00:05:59.487 ************************************ 00:05:59.487 using asan 00:05:59.487 18:09:52 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:05:59.487 00:05:59.487 real 0m0.001s 00:05:59.487 user 0m0.000s 00:05:59.487 sys 0m0.000s 00:05:59.487 18:09:52 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:59.487 18:09:52 asan -- common/autotest_common.sh@10 -- $ set +x 00:05:59.487 ************************************ 00:05:59.487 END TEST asan 00:05:59.487 ************************************ 00:05:59.487 18:09:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:59.487 18:09:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:59.487 18:09:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:59.487 18:09:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:59.487 18:09:52 -- common/autotest_common.sh@10 -- $ set +x 00:05:59.487 ************************************ 00:05:59.487 START TEST ubsan 00:05:59.487 ************************************ 00:05:59.487 using ubsan 00:05:59.487 18:09:52 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:05:59.487 00:05:59.487 real 0m0.000s 00:05:59.487 user 0m0.000s 00:05:59.487 sys 0m0.000s 00:05:59.487 18:09:52 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:59.487 18:09:52 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:05:59.487 ************************************ 00:05:59.487 END TEST ubsan 00:05:59.487 ************************************ 00:05:59.746 18:09:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:59.746 18:09:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:59.746 18:09:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:59.746 18:09:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:59.746 18:09:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:59.746 18:09:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:59.746 18:09:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
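The START TEST / END TEST banners above are printed by autotest's run_test wrapper, which labels a test, times the wrapped command, and reports the real/user/sys totals shown in the log. A minimal bash sketch of that wrapper pattern follows; it reconstructs only the behavior visible above and is not SPDK's actual autotest_common.sh implementation:

#!/usr/bin/env bash
# run_test NAME CMD [ARGS...] -- hypothetical stand-in for the SPDK helper,
# reproducing the banner and timing output seen in the log above.
run_test() {
	local name=$1
	shift
	echo "************************************"
	echo "START TEST $name"
	echo "************************************"
	time "$@"        # bash's time keyword prints real/user/sys, as in the log
	local rc=$?      # capture the wrapped command's exit status
	echo "************************************"
	echo "END TEST $name"
	echo "************************************"
	return "$rc"
}

run_test asan echo 'using asan'
run_test ubsan echo 'using ubsan'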
00:05:59.746 18:09:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:59.746 18:09:52 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:05:59.746 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:59.746 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:00.313 Using 'verbs' RDMA provider 00:06:19.337 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:06:31.539 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:06:32.368 Creating mk/config.mk...done. 00:06:32.368 Creating mk/cc.flags.mk...done. 00:06:32.368 Type 'make' to build. 00:06:32.368 18:10:25 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:06:32.368 18:10:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:32.368 18:10:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:32.368 18:10:25 -- common/autotest_common.sh@10 -- $ set +x 00:06:32.368 ************************************ 00:06:32.368 START TEST make 00:06:32.368 ************************************ 00:06:32.368 18:10:25 make -- common/autotest_common.sh@1129 -- $ make -j10 00:06:32.626 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:06:32.626 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:06:32.626 meson setup builddir \ 00:06:32.626 -Dwith-libaio=enabled \ 00:06:32.626 -Dwith-liburing=enabled \ 00:06:32.626 -Dwith-libvfn=disabled \ 00:06:32.626 -Dwith-spdk=disabled \ 00:06:32.626 -Dexamples=false \ 00:06:32.626 -Dtests=false \ 00:06:32.626 -Dtools=false && \ 00:06:32.626 meson compile -C builddir && \ 00:06:32.626 cd -) 00:06:32.626 make[1]: Nothing to be done for 'all'. 
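Before the main SPDK make proceeds, the bundled xnvme subproject is configured with Meson, as the parenthesized subshell above shows: feature toggles travel as -D options at setup time and the build runs out-of-tree in builddir. Below is a minimal sketch of that same out-of-tree pattern using only stock meson commands; the source path is the one from this log, and the "meson configure" inspection step is an added assumption for verifying the options, not part of the pipeline:

#!/usr/bin/env bash
# Sketch: configure and build xnvme out-of-tree, mirroring the step above.
set -euo pipefail

cd /home/vagrant/spdk_repo/spdk/xnvme

# Feature toggles are plain -D options; a disabled one is reported later
# in the meson output as "Dependency libvfn skipped: feature with-libvfn disabled".
meson setup builddir \
	-Dwith-libaio=enabled \
	-Dwith-liburing=enabled \
	-Dwith-libvfn=disabled \
	-Dwith-spdk=disabled \
	-Dexamples=false -Dtests=false -Dtools=false

meson configure builddir   # list the options this builddir was set up with
meson compile -C builddir  # drives ninja, producing the [N/76] lines below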
00:06:35.178 The Meson build system 00:06:35.178 Version: 1.5.0 00:06:35.178 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:06:35.178 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:06:35.178 Build type: native build 00:06:35.178 Project name: xnvme 00:06:35.178 Project version: 0.7.5 00:06:35.178 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:35.178 C linker for the host machine: cc ld.bfd 2.40-14 00:06:35.178 Host machine cpu family: x86_64 00:06:35.178 Host machine cpu: x86_64 00:06:35.178 Message: host_machine.system: linux 00:06:35.178 Compiler for C supports arguments -Wno-missing-braces: YES 00:06:35.178 Compiler for C supports arguments -Wno-cast-function-type: YES 00:06:35.178 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:06:35.178 Run-time dependency threads found: YES 00:06:35.178 Has header "setupapi.h" : NO 00:06:35.178 Has header "linux/blkzoned.h" : YES 00:06:35.178 Has header "linux/blkzoned.h" : YES (cached) 00:06:35.178 Has header "libaio.h" : YES 00:06:35.178 Library aio found: YES 00:06:35.178 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:35.178 Run-time dependency liburing found: YES 2.2 00:06:35.178 Dependency libvfn skipped: feature with-libvfn disabled 00:06:35.178 Found CMake: /usr/bin/cmake (3.27.7) 00:06:35.178 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:06:35.178 Subproject spdk : skipped: feature with-spdk disabled 00:06:35.178 Run-time dependency appleframeworks found: NO (tried framework) 00:06:35.178 Run-time dependency appleframeworks found: NO (tried framework) 00:06:35.178 Library rt found: YES 00:06:35.178 Checking for function "clock_gettime" with dependency -lrt: YES 00:06:35.178 Configuring xnvme_config.h using configuration 00:06:35.178 Configuring xnvme.spec using configuration 00:06:35.178 Run-time dependency bash-completion found: YES 2.11 00:06:35.178 Message: Bash-completions: /usr/share/bash-completion/completions 00:06:35.178 Program cp found: YES (/usr/bin/cp) 00:06:35.178 Build targets in project: 3 00:06:35.178 00:06:35.178 xnvme 0.7.5 00:06:35.178 00:06:35.178 Subprojects 00:06:35.178 spdk : NO Feature 'with-spdk' disabled 00:06:35.178 00:06:35.178 User defined options 00:06:35.178 examples : false 00:06:35.178 tests : false 00:06:35.178 tools : false 00:06:35.178 with-libaio : enabled 00:06:35.178 with-liburing: enabled 00:06:35.178 with-libvfn : disabled 00:06:35.178 with-spdk : disabled 00:06:35.178 00:06:35.178 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:35.743 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:06:35.743 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:06:35.743 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:06:35.743 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:06:35.743 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:06:35.743 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:06:35.743 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:06:35.743 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:06:35.743 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:06:35.743 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:06:35.743 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 
00:06:35.743 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:06:35.743 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:06:35.743 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:06:35.743 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:06:35.743 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:06:35.743 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:06:35.743 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:06:35.743 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:06:36.000 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:06:36.000 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:06:36.000 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:06:36.000 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:06:36.000 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:06:36.000 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:06:36.000 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:06:36.000 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:06:36.000 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:06:36.000 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:06:36.000 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:06:36.000 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:06:36.000 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:06:36.000 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:06:36.000 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:06:36.000 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:06:36.000 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:06:36.000 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:06:36.000 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:06:36.000 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:06:36.000 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:06:36.000 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:06:36.000 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:06:36.000 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:06:36.000 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:06:36.000 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:06:36.000 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:06:36.000 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:06:36.000 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:06:36.000 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:06:36.000 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:06:36.258 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 
00:06:36.258 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:06:36.258 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:06:36.258 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:06:36.258 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:06:36.258 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:06:36.258 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:06:36.258 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:06:36.258 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:06:36.258 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:06:36.258 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:06:36.258 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:06:36.258 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:06:36.258 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:06:36.258 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:06:36.258 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:06:36.516 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:06:36.516 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:06:36.516 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:06:36.516 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:06:36.516 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:06:36.516 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:06:36.516 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:06:36.516 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:06:37.085 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:06:37.085 [75/76] Linking static target lib/libxnvme.a 00:06:37.085 [76/76] Linking target lib/libxnvme.so.0.7.5 00:06:37.085 INFO: autodetecting backend as ninja 00:06:37.085 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:06:37.085 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:06:45.205 The Meson build system 00:06:45.205 Version: 1.5.0 00:06:45.205 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:06:45.205 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:06:45.205 Build type: native build 00:06:45.205 Program cat found: YES (/usr/bin/cat) 00:06:45.205 Project name: DPDK 00:06:45.205 Project version: 24.03.0 00:06:45.205 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:45.205 C linker for the host machine: cc ld.bfd 2.40-14 00:06:45.205 Host machine cpu family: x86_64 00:06:45.205 Host machine cpu: x86_64 00:06:45.205 Message: ## Building in Developer Mode ## 00:06:45.205 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:45.205 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:06:45.205 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:45.205 Program python3 found: YES (/usr/bin/python3) 00:06:45.205 Program cat found: YES (/usr/bin/cat) 00:06:45.205 Compiler for C supports arguments -march=native: YES 00:06:45.205 Checking for size of "void *" : 8 00:06:45.205 Checking for size of "void *" : 8 (cached) 00:06:45.205 Compiler for C supports 
link arguments -Wl,--undefined-version: YES 00:06:45.205 Library m found: YES 00:06:45.205 Library numa found: YES 00:06:45.205 Has header "numaif.h" : YES 00:06:45.205 Library fdt found: NO 00:06:45.205 Library execinfo found: NO 00:06:45.205 Has header "execinfo.h" : YES 00:06:45.205 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:45.205 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:45.205 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:45.205 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:45.205 Run-time dependency openssl found: YES 3.1.1 00:06:45.205 Run-time dependency libpcap found: YES 1.10.4 00:06:45.205 Has header "pcap.h" with dependency libpcap: YES 00:06:45.205 Compiler for C supports arguments -Wcast-qual: YES 00:06:45.205 Compiler for C supports arguments -Wdeprecated: YES 00:06:45.205 Compiler for C supports arguments -Wformat: YES 00:06:45.205 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:45.205 Compiler for C supports arguments -Wformat-security: NO 00:06:45.205 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:45.205 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:45.205 Compiler for C supports arguments -Wnested-externs: YES 00:06:45.206 Compiler for C supports arguments -Wold-style-definition: YES 00:06:45.206 Compiler for C supports arguments -Wpointer-arith: YES 00:06:45.206 Compiler for C supports arguments -Wsign-compare: YES 00:06:45.206 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:45.206 Compiler for C supports arguments -Wundef: YES 00:06:45.206 Compiler for C supports arguments -Wwrite-strings: YES 00:06:45.206 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:45.206 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:06:45.206 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:45.206 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:45.206 Program objdump found: YES (/usr/bin/objdump) 00:06:45.206 Compiler for C supports arguments -mavx512f: YES 00:06:45.206 Checking if "AVX512 checking" compiles: YES 00:06:45.206 Fetching value of define "__SSE4_2__" : 1 00:06:45.206 Fetching value of define "__AES__" : 1 00:06:45.206 Fetching value of define "__AVX__" : 1 00:06:45.206 Fetching value of define "__AVX2__" : 1 00:06:45.206 Fetching value of define "__AVX512BW__" : 1 00:06:45.206 Fetching value of define "__AVX512CD__" : 1 00:06:45.206 Fetching value of define "__AVX512DQ__" : 1 00:06:45.206 Fetching value of define "__AVX512F__" : 1 00:06:45.206 Fetching value of define "__AVX512VL__" : 1 00:06:45.206 Fetching value of define "__PCLMUL__" : 1 00:06:45.206 Fetching value of define "__RDRND__" : 1 00:06:45.206 Fetching value of define "__RDSEED__" : 1 00:06:45.206 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:06:45.206 Fetching value of define "__znver1__" : (undefined) 00:06:45.206 Fetching value of define "__znver2__" : (undefined) 00:06:45.206 Fetching value of define "__znver3__" : (undefined) 00:06:45.206 Fetching value of define "__znver4__" : (undefined) 00:06:45.206 Library asan found: YES 00:06:45.206 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:45.206 Message: lib/log: Defining dependency "log" 00:06:45.206 Message: lib/kvargs: Defining dependency "kvargs" 00:06:45.206 Message: lib/telemetry: Defining dependency "telemetry" 00:06:45.206 Library rt found: YES 00:06:45.206 Checking for function "getentropy" : 
NO 00:06:45.206 Message: lib/eal: Defining dependency "eal" 00:06:45.206 Message: lib/ring: Defining dependency "ring" 00:06:45.206 Message: lib/rcu: Defining dependency "rcu" 00:06:45.206 Message: lib/mempool: Defining dependency "mempool" 00:06:45.206 Message: lib/mbuf: Defining dependency "mbuf" 00:06:45.206 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:45.206 Fetching value of define "__AVX512F__" : 1 (cached) 00:06:45.206 Fetching value of define "__AVX512BW__" : 1 (cached) 00:06:45.206 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:06:45.206 Fetching value of define "__AVX512VL__" : 1 (cached) 00:06:45.206 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:06:45.206 Compiler for C supports arguments -mpclmul: YES 00:06:45.206 Compiler for C supports arguments -maes: YES 00:06:45.206 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:45.206 Compiler for C supports arguments -mavx512bw: YES 00:06:45.206 Compiler for C supports arguments -mavx512dq: YES 00:06:45.206 Compiler for C supports arguments -mavx512vl: YES 00:06:45.206 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:45.206 Compiler for C supports arguments -mavx2: YES 00:06:45.206 Compiler for C supports arguments -mavx: YES 00:06:45.206 Message: lib/net: Defining dependency "net" 00:06:45.206 Message: lib/meter: Defining dependency "meter" 00:06:45.206 Message: lib/ethdev: Defining dependency "ethdev" 00:06:45.206 Message: lib/pci: Defining dependency "pci" 00:06:45.206 Message: lib/cmdline: Defining dependency "cmdline" 00:06:45.206 Message: lib/hash: Defining dependency "hash" 00:06:45.206 Message: lib/timer: Defining dependency "timer" 00:06:45.206 Message: lib/compressdev: Defining dependency "compressdev" 00:06:45.206 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:45.206 Message: lib/dmadev: Defining dependency "dmadev" 00:06:45.206 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:45.206 Message: lib/power: Defining dependency "power" 00:06:45.206 Message: lib/reorder: Defining dependency "reorder" 00:06:45.206 Message: lib/security: Defining dependency "security" 00:06:45.206 Has header "linux/userfaultfd.h" : YES 00:06:45.206 Has header "linux/vduse.h" : YES 00:06:45.206 Message: lib/vhost: Defining dependency "vhost" 00:06:45.206 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:45.206 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:45.206 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:45.206 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:45.206 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:45.206 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:45.206 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:45.206 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:45.206 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:45.206 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:45.206 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:45.206 Configuring doxy-api-html.conf using configuration 00:06:45.206 Configuring doxy-api-man.conf using configuration 00:06:45.206 Program mandb found: YES (/usr/bin/mandb) 00:06:45.206 Program sphinx-build found: NO 00:06:45.206 Configuring rte_build_config.h using configuration 00:06:45.206 Message: 00:06:45.206 ================= 00:06:45.206 
Applications Enabled 00:06:45.206 ================= 00:06:45.206 00:06:45.206 apps: 00:06:45.206 00:06:45.206 00:06:45.206 Message: 00:06:45.206 ================= 00:06:45.206 Libraries Enabled 00:06:45.206 ================= 00:06:45.206 00:06:45.206 libs: 00:06:45.206 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:45.206 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:45.206 cryptodev, dmadev, power, reorder, security, vhost, 00:06:45.206 00:06:45.206 Message: 00:06:45.206 =============== 00:06:45.206 Drivers Enabled 00:06:45.206 =============== 00:06:45.206 00:06:45.206 common: 00:06:45.206 00:06:45.206 bus: 00:06:45.206 pci, vdev, 00:06:45.206 mempool: 00:06:45.206 ring, 00:06:45.206 dma: 00:06:45.206 00:06:45.206 net: 00:06:45.206 00:06:45.206 crypto: 00:06:45.206 00:06:45.206 compress: 00:06:45.206 00:06:45.206 vdpa: 00:06:45.206 00:06:45.206 00:06:45.206 Message: 00:06:45.206 ================= 00:06:45.206 Content Skipped 00:06:45.206 ================= 00:06:45.206 00:06:45.206 apps: 00:06:45.206 dumpcap: explicitly disabled via build config 00:06:45.206 graph: explicitly disabled via build config 00:06:45.206 pdump: explicitly disabled via build config 00:06:45.206 proc-info: explicitly disabled via build config 00:06:45.206 test-acl: explicitly disabled via build config 00:06:45.206 test-bbdev: explicitly disabled via build config 00:06:45.206 test-cmdline: explicitly disabled via build config 00:06:45.206 test-compress-perf: explicitly disabled via build config 00:06:45.206 test-crypto-perf: explicitly disabled via build config 00:06:45.206 test-dma-perf: explicitly disabled via build config 00:06:45.206 test-eventdev: explicitly disabled via build config 00:06:45.206 test-fib: explicitly disabled via build config 00:06:45.206 test-flow-perf: explicitly disabled via build config 00:06:45.206 test-gpudev: explicitly disabled via build config 00:06:45.206 test-mldev: explicitly disabled via build config 00:06:45.206 test-pipeline: explicitly disabled via build config 00:06:45.206 test-pmd: explicitly disabled via build config 00:06:45.206 test-regex: explicitly disabled via build config 00:06:45.206 test-sad: explicitly disabled via build config 00:06:45.206 test-security-perf: explicitly disabled via build config 00:06:45.206 00:06:45.206 libs: 00:06:45.206 argparse: explicitly disabled via build config 00:06:45.206 metrics: explicitly disabled via build config 00:06:45.206 acl: explicitly disabled via build config 00:06:45.206 bbdev: explicitly disabled via build config 00:06:45.206 bitratestats: explicitly disabled via build config 00:06:45.206 bpf: explicitly disabled via build config 00:06:45.206 cfgfile: explicitly disabled via build config 00:06:45.206 distributor: explicitly disabled via build config 00:06:45.206 efd: explicitly disabled via build config 00:06:45.206 eventdev: explicitly disabled via build config 00:06:45.206 dispatcher: explicitly disabled via build config 00:06:45.206 gpudev: explicitly disabled via build config 00:06:45.206 gro: explicitly disabled via build config 00:06:45.206 gso: explicitly disabled via build config 00:06:45.206 ip_frag: explicitly disabled via build config 00:06:45.206 jobstats: explicitly disabled via build config 00:06:45.206 latencystats: explicitly disabled via build config 00:06:45.206 lpm: explicitly disabled via build config 00:06:45.206 member: explicitly disabled via build config 00:06:45.206 pcapng: explicitly disabled via build config 00:06:45.206 rawdev: explicitly disabled via build config 
00:06:45.206 regexdev: explicitly disabled via build config 00:06:45.206 mldev: explicitly disabled via build config 00:06:45.206 rib: explicitly disabled via build config 00:06:45.206 sched: explicitly disabled via build config 00:06:45.206 stack: explicitly disabled via build config 00:06:45.206 ipsec: explicitly disabled via build config 00:06:45.206 pdcp: explicitly disabled via build config 00:06:45.206 fib: explicitly disabled via build config 00:06:45.206 port: explicitly disabled via build config 00:06:45.206 pdump: explicitly disabled via build config 00:06:45.206 table: explicitly disabled via build config 00:06:45.206 pipeline: explicitly disabled via build config 00:06:45.206 graph: explicitly disabled via build config 00:06:45.206 node: explicitly disabled via build config 00:06:45.206 00:06:45.206 drivers: 00:06:45.206 common/cpt: not in enabled drivers build config 00:06:45.206 common/dpaax: not in enabled drivers build config 00:06:45.206 common/iavf: not in enabled drivers build config 00:06:45.206 common/idpf: not in enabled drivers build config 00:06:45.206 common/ionic: not in enabled drivers build config 00:06:45.206 common/mvep: not in enabled drivers build config 00:06:45.206 common/octeontx: not in enabled drivers build config 00:06:45.206 bus/auxiliary: not in enabled drivers build config 00:06:45.206 bus/cdx: not in enabled drivers build config 00:06:45.206 bus/dpaa: not in enabled drivers build config 00:06:45.206 bus/fslmc: not in enabled drivers build config 00:06:45.207 bus/ifpga: not in enabled drivers build config 00:06:45.207 bus/platform: not in enabled drivers build config 00:06:45.207 bus/uacce: not in enabled drivers build config 00:06:45.207 bus/vmbus: not in enabled drivers build config 00:06:45.207 common/cnxk: not in enabled drivers build config 00:06:45.207 common/mlx5: not in enabled drivers build config 00:06:45.207 common/nfp: not in enabled drivers build config 00:06:45.207 common/nitrox: not in enabled drivers build config 00:06:45.207 common/qat: not in enabled drivers build config 00:06:45.207 common/sfc_efx: not in enabled drivers build config 00:06:45.207 mempool/bucket: not in enabled drivers build config 00:06:45.207 mempool/cnxk: not in enabled drivers build config 00:06:45.207 mempool/dpaa: not in enabled drivers build config 00:06:45.207 mempool/dpaa2: not in enabled drivers build config 00:06:45.207 mempool/octeontx: not in enabled drivers build config 00:06:45.207 mempool/stack: not in enabled drivers build config 00:06:45.207 dma/cnxk: not in enabled drivers build config 00:06:45.207 dma/dpaa: not in enabled drivers build config 00:06:45.207 dma/dpaa2: not in enabled drivers build config 00:06:45.207 dma/hisilicon: not in enabled drivers build config 00:06:45.207 dma/idxd: not in enabled drivers build config 00:06:45.207 dma/ioat: not in enabled drivers build config 00:06:45.207 dma/skeleton: not in enabled drivers build config 00:06:45.207 net/af_packet: not in enabled drivers build config 00:06:45.207 net/af_xdp: not in enabled drivers build config 00:06:45.207 net/ark: not in enabled drivers build config 00:06:45.207 net/atlantic: not in enabled drivers build config 00:06:45.207 net/avp: not in enabled drivers build config 00:06:45.207 net/axgbe: not in enabled drivers build config 00:06:45.207 net/bnx2x: not in enabled drivers build config 00:06:45.207 net/bnxt: not in enabled drivers build config 00:06:45.207 net/bonding: not in enabled drivers build config 00:06:45.207 net/cnxk: not in enabled drivers build config 
00:06:45.207 net/cpfl: not in enabled drivers build config 00:06:45.207 net/cxgbe: not in enabled drivers build config 00:06:45.207 net/dpaa: not in enabled drivers build config 00:06:45.207 net/dpaa2: not in enabled drivers build config 00:06:45.207 net/e1000: not in enabled drivers build config 00:06:45.207 net/ena: not in enabled drivers build config 00:06:45.207 net/enetc: not in enabled drivers build config 00:06:45.207 net/enetfec: not in enabled drivers build config 00:06:45.207 net/enic: not in enabled drivers build config 00:06:45.207 net/failsafe: not in enabled drivers build config 00:06:45.207 net/fm10k: not in enabled drivers build config 00:06:45.207 net/gve: not in enabled drivers build config 00:06:45.207 net/hinic: not in enabled drivers build config 00:06:45.207 net/hns3: not in enabled drivers build config 00:06:45.207 net/i40e: not in enabled drivers build config 00:06:45.207 net/iavf: not in enabled drivers build config 00:06:45.207 net/ice: not in enabled drivers build config 00:06:45.207 net/idpf: not in enabled drivers build config 00:06:45.207 net/igc: not in enabled drivers build config 00:06:45.207 net/ionic: not in enabled drivers build config 00:06:45.207 net/ipn3ke: not in enabled drivers build config 00:06:45.207 net/ixgbe: not in enabled drivers build config 00:06:45.207 net/mana: not in enabled drivers build config 00:06:45.207 net/memif: not in enabled drivers build config 00:06:45.207 net/mlx4: not in enabled drivers build config 00:06:45.207 net/mlx5: not in enabled drivers build config 00:06:45.207 net/mvneta: not in enabled drivers build config 00:06:45.207 net/mvpp2: not in enabled drivers build config 00:06:45.207 net/netvsc: not in enabled drivers build config 00:06:45.207 net/nfb: not in enabled drivers build config 00:06:45.207 net/nfp: not in enabled drivers build config 00:06:45.207 net/ngbe: not in enabled drivers build config 00:06:45.207 net/null: not in enabled drivers build config 00:06:45.207 net/octeontx: not in enabled drivers build config 00:06:45.207 net/octeon_ep: not in enabled drivers build config 00:06:45.207 net/pcap: not in enabled drivers build config 00:06:45.207 net/pfe: not in enabled drivers build config 00:06:45.207 net/qede: not in enabled drivers build config 00:06:45.207 net/ring: not in enabled drivers build config 00:06:45.207 net/sfc: not in enabled drivers build config 00:06:45.207 net/softnic: not in enabled drivers build config 00:06:45.207 net/tap: not in enabled drivers build config 00:06:45.207 net/thunderx: not in enabled drivers build config 00:06:45.207 net/txgbe: not in enabled drivers build config 00:06:45.207 net/vdev_netvsc: not in enabled drivers build config 00:06:45.207 net/vhost: not in enabled drivers build config 00:06:45.207 net/virtio: not in enabled drivers build config 00:06:45.207 net/vmxnet3: not in enabled drivers build config 00:06:45.207 raw/*: missing internal dependency, "rawdev" 00:06:45.207 crypto/armv8: not in enabled drivers build config 00:06:45.207 crypto/bcmfs: not in enabled drivers build config 00:06:45.207 crypto/caam_jr: not in enabled drivers build config 00:06:45.207 crypto/ccp: not in enabled drivers build config 00:06:45.207 crypto/cnxk: not in enabled drivers build config 00:06:45.207 crypto/dpaa_sec: not in enabled drivers build config 00:06:45.207 crypto/dpaa2_sec: not in enabled drivers build config 00:06:45.207 crypto/ipsec_mb: not in enabled drivers build config 00:06:45.207 crypto/mlx5: not in enabled drivers build config 00:06:45.207 crypto/mvsam: not in enabled 
drivers build config 00:06:45.207 crypto/nitrox: not in enabled drivers build config 00:06:45.207 crypto/null: not in enabled drivers build config 00:06:45.207 crypto/octeontx: not in enabled drivers build config 00:06:45.207 crypto/openssl: not in enabled drivers build config 00:06:45.207 crypto/scheduler: not in enabled drivers build config 00:06:45.207 crypto/uadk: not in enabled drivers build config 00:06:45.207 crypto/virtio: not in enabled drivers build config 00:06:45.207 compress/isal: not in enabled drivers build config 00:06:45.207 compress/mlx5: not in enabled drivers build config 00:06:45.207 compress/nitrox: not in enabled drivers build config 00:06:45.207 compress/octeontx: not in enabled drivers build config 00:06:45.207 compress/zlib: not in enabled drivers build config 00:06:45.207 regex/*: missing internal dependency, "regexdev" 00:06:45.207 ml/*: missing internal dependency, "mldev" 00:06:45.207 vdpa/ifc: not in enabled drivers build config 00:06:45.207 vdpa/mlx5: not in enabled drivers build config 00:06:45.207 vdpa/nfp: not in enabled drivers build config 00:06:45.207 vdpa/sfc: not in enabled drivers build config 00:06:45.207 event/*: missing internal dependency, "eventdev" 00:06:45.207 baseband/*: missing internal dependency, "bbdev" 00:06:45.207 gpu/*: missing internal dependency, "gpudev" 00:06:45.207 00:06:45.207 00:06:45.207 Build targets in project: 85 00:06:45.207 00:06:45.207 DPDK 24.03.0 00:06:45.207 00:06:45.207 User defined options 00:06:45.207 buildtype : debug 00:06:45.207 default_library : shared 00:06:45.207 libdir : lib 00:06:45.207 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:45.207 b_sanitize : address 00:06:45.207 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:45.207 c_link_args : 00:06:45.207 cpu_instruction_set: native 00:06:45.207 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:45.207 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:45.207 enable_docs : false 00:06:45.207 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:06:45.207 enable_kmods : false 00:06:45.207 max_lcores : 128 00:06:45.207 tests : false 00:06:45.207 00:06:45.207 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:45.467 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:45.726 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:45.726 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:45.726 [3/268] Linking static target lib/librte_kvargs.a 00:06:45.726 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:45.726 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:45.726 [6/268] Linking static target lib/librte_log.a 00:06:46.295 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:46.295 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:46.295 [9/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:46.295 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:46.295 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:46.295 [12/268] Linking static target lib/librte_telemetry.a 00:06:46.295 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:46.295 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:46.295 [15/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:46.295 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:46.586 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:46.586 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:46.852 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:46.852 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:46.852 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:46.852 [22/268] Linking target lib/librte_log.so.24.1 00:06:46.852 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:46.852 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:46.852 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:46.852 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:47.111 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:47.111 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:47.111 [29/268] Linking target lib/librte_kvargs.so.24.1 00:06:47.111 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:47.111 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:47.369 [32/268] Linking target lib/librte_telemetry.so.24.1 00:06:47.369 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:47.369 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:47.369 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:47.626 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:47.626 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:47.626 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:47.626 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:47.626 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:47.626 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:47.626 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:47.626 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:47.626 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:48.190 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:48.190 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:48.190 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:48.447 
[48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:48.447 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:48.447 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:48.447 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:48.447 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:48.447 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:48.447 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:48.706 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:48.706 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:48.963 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:48.963 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:48.963 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:48.963 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:48.963 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:48.963 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:48.963 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:48.963 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:49.221 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:49.221 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:49.480 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:49.480 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:49.740 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:49.740 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:49.740 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:49.740 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:49.740 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:49.740 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:49.740 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:49.998 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:49.998 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:49.998 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:49.998 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:50.256 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:50.256 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:50.256 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:50.538 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:50.538 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:50.538 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:50.538 [86/268] Linking static target lib/librte_ring.a 00:06:50.538 [87/268] Linking static target lib/librte_eal.a 00:06:50.538 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:50.869 [89/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:50.869 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:50.869 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:50.869 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:50.869 [93/268] Linking static target lib/librte_mempool.a 00:06:51.128 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:51.128 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:51.128 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:51.128 [97/268] Linking static target lib/librte_rcu.a 00:06:51.128 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:51.388 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:51.388 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:51.388 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:51.647 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:51.647 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:51.647 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:51.647 [105/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:51.647 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:51.906 [107/268] Linking static target lib/librte_net.a 00:06:51.906 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:51.906 [109/268] Linking static target lib/librte_meter.a 00:06:52.165 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:52.165 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:52.165 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:52.165 [113/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:52.165 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:52.165 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:52.425 [116/268] Linking static target lib/librte_mbuf.a 00:06:52.425 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:52.425 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:52.684 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:52.943 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:52.943 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:52.943 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:53.199 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:53.456 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:53.456 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:53.456 [126/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:53.456 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:53.456 [128/268] Linking static target lib/librte_pci.a 00:06:53.456 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:53.456 [130/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:53.715 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:53.715 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:53.715 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:53.715 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:53.715 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:53.715 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:53.715 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:53.715 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:53.975 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:53.975 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:53.975 [141/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:53.975 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:53.975 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:53.975 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:53.975 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:53.975 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:54.234 [147/268] Linking static target lib/librte_cmdline.a 00:06:54.234 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:54.493 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:54.759 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:54.759 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:55.102 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:55.102 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:55.102 [154/268] Linking static target lib/librte_timer.a 00:06:55.102 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:55.102 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:55.102 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:55.102 [158/268] Linking static target lib/librte_ethdev.a 00:06:55.102 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:55.102 [160/268] Linking static target lib/librte_compressdev.a 00:06:55.359 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:55.359 [162/268] Linking static target lib/librte_hash.a 00:06:55.359 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:55.359 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:55.618 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:55.618 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:55.618 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:55.618 [168/268] Linking static target lib/librte_dmadev.a 00:06:55.618 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:55.877 [170/268] Generating lib/cmdline.sym_chk with 
a custom command (wrapped by meson to capture output) 00:06:55.877 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:55.877 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:56.136 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:56.136 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:56.395 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:56.395 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:56.395 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:56.655 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:56.655 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:56.655 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:56.655 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:56.655 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:56.655 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:56.655 [184/268] Linking static target lib/librte_cryptodev.a 00:06:56.913 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:56.913 [186/268] Linking static target lib/librte_power.a 00:06:57.172 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:57.430 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:57.430 [189/268] Linking static target lib/librte_reorder.a 00:06:57.430 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:57.430 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:57.430 [192/268] Linking static target lib/librte_security.a 00:06:57.430 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:57.689 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:57.949 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:58.208 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:58.208 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:58.468 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:58.468 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:58.468 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:58.468 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:58.727 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:58.985 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:58.985 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:58.985 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:59.243 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:59.243 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:59.243 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:59.243 [209/268] Generating lib/cryptodev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:06:59.243 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:59.243 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:59.502 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:59.502 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:59.502 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:59.502 [215/268] Linking static target drivers/librte_bus_pci.a 00:06:59.502 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:59.760 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:59.760 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:59.760 [219/268] Linking static target drivers/librte_bus_vdev.a 00:06:59.760 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:59.760 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:00.019 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:00.019 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:00.019 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:00.019 [225/268] Linking static target drivers/librte_mempool_ring.a 00:07:00.019 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:00.278 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:01.222 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:02.625 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.625 [230/268] Linking target lib/librte_eal.so.24.1 00:07:02.884 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:02.884 [232/268] Linking target lib/librte_meter.so.24.1 00:07:02.884 [233/268] Linking target lib/librte_pci.so.24.1 00:07:02.884 [234/268] Linking target lib/librte_ring.so.24.1 00:07:02.884 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:02.884 [236/268] Linking target lib/librte_dmadev.so.24.1 00:07:02.884 [237/268] Linking target lib/librte_timer.so.24.1 00:07:02.884 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:02.884 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:03.142 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:03.142 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:03.142 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:03.142 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:03.142 [244/268] Linking target lib/librte_rcu.so.24.1 00:07:03.142 [245/268] Linking target lib/librte_mempool.so.24.1 00:07:03.142 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:03.142 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:03.142 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:03.401 
[249/268] Linking target lib/librte_mbuf.so.24.1 00:07:03.401 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:03.401 [251/268] Linking target lib/librte_compressdev.so.24.1 00:07:03.660 [252/268] Linking target lib/librte_reorder.so.24.1 00:07:03.660 [253/268] Linking target lib/librte_net.so.24.1 00:07:03.660 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:07:03.660 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:03.660 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:03.660 [257/268] Linking target lib/librte_hash.so.24.1 00:07:03.660 [258/268] Linking target lib/librte_cmdline.so.24.1 00:07:03.660 [259/268] Linking target lib/librte_security.so.24.1 00:07:03.919 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:04.861 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:04.861 [262/268] Linking target lib/librte_ethdev.so.24.1 00:07:04.861 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:05.129 [264/268] Linking target lib/librte_power.so.24.1 00:07:05.697 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:05.955 [266/268] Linking static target lib/librte_vhost.a 00:07:08.486 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:08.486 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:08.486 INFO: autodetecting backend as ninja 00:07:08.487 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:26.581 CC lib/log/log_flags.o 00:07:26.581 CC lib/log/log.o 00:07:26.581 CC lib/log/log_deprecated.o 00:07:26.581 CC lib/ut_mock/mock.o 00:07:26.581 CC lib/ut/ut.o 00:07:26.581 LIB libspdk_ut_mock.a 00:07:26.581 LIB libspdk_log.a 00:07:26.581 SO libspdk_ut_mock.so.6.0 00:07:26.581 LIB libspdk_ut.a 00:07:26.581 SO libspdk_ut.so.2.0 00:07:26.581 SO libspdk_log.so.7.1 00:07:26.581 SYMLINK libspdk_ut_mock.so 00:07:26.581 SYMLINK libspdk_ut.so 00:07:26.581 SYMLINK libspdk_log.so 00:07:26.581 CC lib/util/crc16.o 00:07:26.581 CC lib/util/base64.o 00:07:26.581 CC lib/util/crc32c.o 00:07:26.581 CC lib/util/bit_array.o 00:07:26.581 CC lib/util/cpuset.o 00:07:26.581 CC lib/util/crc32.o 00:07:26.581 CXX lib/trace_parser/trace.o 00:07:26.581 CC lib/dma/dma.o 00:07:26.581 CC lib/ioat/ioat.o 00:07:26.581 CC lib/vfio_user/host/vfio_user_pci.o 00:07:26.581 CC lib/util/crc32_ieee.o 00:07:26.581 CC lib/util/crc64.o 00:07:26.581 CC lib/vfio_user/host/vfio_user.o 00:07:26.581 CC lib/util/dif.o 00:07:26.581 CC lib/util/fd.o 00:07:26.581 CC lib/util/fd_group.o 00:07:26.581 LIB libspdk_dma.a 00:07:26.581 SO libspdk_dma.so.5.0 00:07:26.581 CC lib/util/file.o 00:07:26.581 CC lib/util/hexlify.o 00:07:26.581 SYMLINK libspdk_dma.so 00:07:26.581 CC lib/util/iov.o 00:07:26.581 LIB libspdk_ioat.a 00:07:26.581 CC lib/util/math.o 00:07:26.581 SO libspdk_ioat.so.7.0 00:07:26.581 LIB libspdk_vfio_user.a 00:07:26.581 CC lib/util/net.o 00:07:26.581 SYMLINK libspdk_ioat.so 00:07:26.581 CC lib/util/pipe.o 00:07:26.581 SO libspdk_vfio_user.so.5.0 00:07:26.581 CC lib/util/strerror_tls.o 00:07:26.581 CC lib/util/string.o 00:07:26.581 SYMLINK libspdk_vfio_user.so 00:07:26.581 CC lib/util/uuid.o 00:07:26.581 CC lib/util/xor.o 00:07:26.847 CC lib/util/zipf.o 00:07:26.847 CC lib/util/md5.o 00:07:27.117 LIB 
libspdk_util.a 00:07:27.389 SO libspdk_util.so.10.1 00:07:27.389 LIB libspdk_trace_parser.a 00:07:27.389 SO libspdk_trace_parser.so.6.0 00:07:27.389 SYMLINK libspdk_util.so 00:07:27.389 SYMLINK libspdk_trace_parser.so 00:07:27.660 CC lib/conf/conf.o 00:07:27.660 CC lib/idxd/idxd.o 00:07:27.660 CC lib/idxd/idxd_user.o 00:07:27.660 CC lib/idxd/idxd_kernel.o 00:07:27.660 CC lib/vmd/vmd.o 00:07:27.660 CC lib/vmd/led.o 00:07:27.660 CC lib/rdma_utils/rdma_utils.o 00:07:27.660 CC lib/env_dpdk/memory.o 00:07:27.660 CC lib/env_dpdk/env.o 00:07:27.660 CC lib/json/json_parse.o 00:07:27.660 CC lib/json/json_util.o 00:07:27.923 CC lib/json/json_write.o 00:07:27.923 LIB libspdk_conf.a 00:07:27.923 CC lib/env_dpdk/pci.o 00:07:27.923 CC lib/env_dpdk/init.o 00:07:27.923 SO libspdk_conf.so.6.0 00:07:27.923 LIB libspdk_rdma_utils.a 00:07:27.923 SO libspdk_rdma_utils.so.1.0 00:07:27.923 SYMLINK libspdk_conf.so 00:07:27.923 CC lib/env_dpdk/threads.o 00:07:27.923 SYMLINK libspdk_rdma_utils.so 00:07:27.923 CC lib/env_dpdk/pci_ioat.o 00:07:28.181 CC lib/env_dpdk/pci_virtio.o 00:07:28.181 CC lib/env_dpdk/pci_vmd.o 00:07:28.181 LIB libspdk_json.a 00:07:28.181 SO libspdk_json.so.6.0 00:07:28.181 CC lib/env_dpdk/pci_idxd.o 00:07:28.181 CC lib/env_dpdk/pci_event.o 00:07:28.181 SYMLINK libspdk_json.so 00:07:28.181 CC lib/env_dpdk/sigbus_handler.o 00:07:28.440 CC lib/env_dpdk/pci_dpdk.o 00:07:28.440 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:28.440 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:28.440 LIB libspdk_idxd.a 00:07:28.441 SO libspdk_idxd.so.12.1 00:07:28.441 CC lib/rdma_provider/common.o 00:07:28.441 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:28.441 LIB libspdk_vmd.a 00:07:28.441 SO libspdk_vmd.so.6.0 00:07:28.441 CC lib/jsonrpc/jsonrpc_server.o 00:07:28.441 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:28.441 SYMLINK libspdk_idxd.so 00:07:28.441 CC lib/jsonrpc/jsonrpc_client.o 00:07:28.441 SYMLINK libspdk_vmd.so 00:07:28.441 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:28.700 LIB libspdk_rdma_provider.a 00:07:28.700 SO libspdk_rdma_provider.so.7.0 00:07:28.700 LIB libspdk_jsonrpc.a 00:07:28.700 SYMLINK libspdk_rdma_provider.so 00:07:28.960 SO libspdk_jsonrpc.so.6.0 00:07:28.960 SYMLINK libspdk_jsonrpc.so 00:07:29.219 CC lib/rpc/rpc.o 00:07:29.479 LIB libspdk_env_dpdk.a 00:07:29.479 LIB libspdk_rpc.a 00:07:29.738 SO libspdk_env_dpdk.so.15.1 00:07:29.738 SO libspdk_rpc.so.6.0 00:07:29.738 SYMLINK libspdk_rpc.so 00:07:29.738 SYMLINK libspdk_env_dpdk.so 00:07:29.998 CC lib/notify/notify_rpc.o 00:07:29.998 CC lib/notify/notify.o 00:07:29.998 CC lib/keyring/keyring_rpc.o 00:07:29.998 CC lib/keyring/keyring.o 00:07:29.998 CC lib/trace/trace.o 00:07:29.998 CC lib/trace/trace_flags.o 00:07:29.998 CC lib/trace/trace_rpc.o 00:07:30.258 LIB libspdk_notify.a 00:07:30.258 SO libspdk_notify.so.6.0 00:07:30.258 LIB libspdk_keyring.a 00:07:30.258 LIB libspdk_trace.a 00:07:30.258 SO libspdk_keyring.so.2.0 00:07:30.258 SYMLINK libspdk_notify.so 00:07:30.516 SO libspdk_trace.so.11.0 00:07:30.516 SYMLINK libspdk_keyring.so 00:07:30.516 SYMLINK libspdk_trace.so 00:07:30.774 CC lib/thread/thread.o 00:07:30.774 CC lib/thread/iobuf.o 00:07:30.774 CC lib/sock/sock.o 00:07:30.774 CC lib/sock/sock_rpc.o 00:07:31.341 LIB libspdk_sock.a 00:07:31.341 SO libspdk_sock.so.10.0 00:07:31.599 SYMLINK libspdk_sock.so 00:07:31.858 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:31.858 CC lib/nvme/nvme_fabric.o 00:07:31.858 CC lib/nvme/nvme_ctrlr.o 00:07:31.858 CC lib/nvme/nvme_pcie.o 00:07:31.858 CC lib/nvme/nvme_ns_cmd.o 00:07:31.858 CC lib/nvme/nvme_pcie_common.o 
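
The DPDK subproject linked in the ninja pass above is consumed through SPDK's env_dpdk layer, and the CC lines here are compiling lib/nvme on top of it. For orientation, a minimal consumer of these two layers looks roughly like the sketch below — initialize the env (which brings up the DPDK EAL) and probe local NVMe controllers. API names are recalled from upstream spdk/env.h and spdk/nvme.h and should be treated as assumptions, not as output of this build.

    /* Hedged sketch: bring up the SPDK env (wrapping the DPDK EAL built
     * above) and probe PCIe NVMe controllers via lib/nvme. Verify the
     * signatures against this source revision before relying on them. */
    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <stdio.h>

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("probed %s\n", trid->traddr);
        return true; /* attach to every controller found */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("attached %s\n", trid->traddr);
        spdk_nvme_detach(ctrlr); /* release immediately in this sketch */
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "probe_sketch"; /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* NULL transport ID probes the local PCIe bus by default. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
    }
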
00:07:31.858 CC lib/nvme/nvme_ns.o 00:07:31.858 CC lib/nvme/nvme.o 00:07:31.858 CC lib/nvme/nvme_qpair.o 00:07:32.795 CC lib/nvme/nvme_quirks.o 00:07:32.795 CC lib/nvme/nvme_transport.o 00:07:32.795 CC lib/nvme/nvme_discovery.o 00:07:32.795 LIB libspdk_thread.a 00:07:32.795 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:32.795 SO libspdk_thread.so.11.0 00:07:32.795 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:32.795 CC lib/nvme/nvme_tcp.o 00:07:32.795 SYMLINK libspdk_thread.so 00:07:32.795 CC lib/nvme/nvme_opal.o 00:07:33.055 CC lib/accel/accel.o 00:07:33.055 CC lib/nvme/nvme_io_msg.o 00:07:33.055 CC lib/nvme/nvme_poll_group.o 00:07:33.314 CC lib/nvme/nvme_zns.o 00:07:33.314 CC lib/nvme/nvme_stubs.o 00:07:33.314 CC lib/nvme/nvme_auth.o 00:07:33.574 CC lib/nvme/nvme_cuse.o 00:07:33.574 CC lib/nvme/nvme_rdma.o 00:07:33.833 CC lib/accel/accel_rpc.o 00:07:33.833 CC lib/accel/accel_sw.o 00:07:34.091 CC lib/blob/blobstore.o 00:07:34.092 CC lib/blob/request.o 00:07:34.092 CC lib/init/json_config.o 00:07:34.352 CC lib/init/subsystem.o 00:07:34.352 CC lib/init/subsystem_rpc.o 00:07:34.352 LIB libspdk_accel.a 00:07:34.611 CC lib/blob/zeroes.o 00:07:34.611 SO libspdk_accel.so.16.0 00:07:34.611 CC lib/blob/blob_bs_dev.o 00:07:34.611 CC lib/virtio/virtio.o 00:07:34.611 CC lib/init/rpc.o 00:07:34.611 SYMLINK libspdk_accel.so 00:07:34.611 CC lib/virtio/virtio_vhost_user.o 00:07:34.611 CC lib/virtio/virtio_vfio_user.o 00:07:34.871 LIB libspdk_init.a 00:07:34.871 CC lib/virtio/virtio_pci.o 00:07:34.871 CC lib/fsdev/fsdev.o 00:07:34.871 SO libspdk_init.so.6.0 00:07:34.871 CC lib/bdev/bdev.o 00:07:34.871 SYMLINK libspdk_init.so 00:07:34.871 CC lib/bdev/bdev_rpc.o 00:07:34.871 CC lib/bdev/bdev_zone.o 00:07:35.133 CC lib/bdev/part.o 00:07:35.133 CC lib/fsdev/fsdev_io.o 00:07:35.133 CC lib/event/app.o 00:07:35.133 LIB libspdk_virtio.a 00:07:35.133 SO libspdk_virtio.so.7.0 00:07:35.133 CC lib/event/reactor.o 00:07:35.394 CC lib/event/log_rpc.o 00:07:35.394 LIB libspdk_nvme.a 00:07:35.394 SYMLINK libspdk_virtio.so 00:07:35.394 CC lib/event/app_rpc.o 00:07:35.394 CC lib/event/scheduler_static.o 00:07:35.394 CC lib/fsdev/fsdev_rpc.o 00:07:35.652 SO libspdk_nvme.so.15.0 00:07:35.652 CC lib/bdev/scsi_nvme.o 00:07:35.652 LIB libspdk_fsdev.a 00:07:35.652 SO libspdk_fsdev.so.2.0 00:07:35.909 SYMLINK libspdk_nvme.so 00:07:35.909 LIB libspdk_event.a 00:07:35.909 SYMLINK libspdk_fsdev.so 00:07:35.909 SO libspdk_event.so.14.0 00:07:35.909 SYMLINK libspdk_event.so 00:07:36.168 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:37.102 LIB libspdk_fuse_dispatcher.a 00:07:37.102 SO libspdk_fuse_dispatcher.so.1.0 00:07:37.102 SYMLINK libspdk_fuse_dispatcher.so 00:07:38.480 LIB libspdk_blob.a 00:07:38.480 SO libspdk_blob.so.12.0 00:07:38.480 LIB libspdk_bdev.a 00:07:38.480 SO libspdk_bdev.so.17.0 00:07:38.480 SYMLINK libspdk_blob.so 00:07:38.739 SYMLINK libspdk_bdev.so 00:07:38.740 CC lib/lvol/lvol.o 00:07:38.740 CC lib/blobfs/tree.o 00:07:38.740 CC lib/blobfs/blobfs.o 00:07:38.999 CC lib/ftl/ftl_core.o 00:07:38.999 CC lib/ftl/ftl_init.o 00:07:38.999 CC lib/ftl/ftl_layout.o 00:07:38.999 CC lib/nbd/nbd.o 00:07:38.999 CC lib/nvmf/ctrlr.o 00:07:38.999 CC lib/ublk/ublk.o 00:07:38.999 CC lib/scsi/dev.o 00:07:38.999 CC lib/scsi/lun.o 00:07:38.999 CC lib/scsi/port.o 00:07:39.299 CC lib/nvmf/ctrlr_discovery.o 00:07:39.299 CC lib/scsi/scsi.o 00:07:39.299 CC lib/scsi/scsi_bdev.o 00:07:39.299 CC lib/ftl/ftl_debug.o 00:07:39.299 CC lib/nvmf/ctrlr_bdev.o 00:07:39.567 CC lib/nbd/nbd_rpc.o 00:07:39.567 CC lib/scsi/scsi_pr.o 00:07:39.567 LIB libspdk_nbd.a 
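
The LIB lines in this stretch archive the storage stack (libspdk_blob, libspdk_bdev, libspdk_ftl, libspdk_nbd, ...). Consumer code reaches any of these backends uniformly through the bdev API; the sketch below shows the usual open/channel/read sequence. It must run on an SPDK thread (for example inside the app framework sketched at the end of this build stage), and the names are recalled from upstream spdk/bdev.h — treat the exact signatures as assumptions.

    /* Hedged sketch: open a bdev by name and issue one 4 KiB read.
     * Error/teardown paths are omitted for brevity. */
    #include "spdk/bdev.h"
    #include "spdk/env.h"

    static void
    read_done(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg)
    {
        spdk_bdev_free_io(bdev_io); /* always release the completed I/O */
    }

    static void
    bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
                  void *event_ctx)
    {
        /* a real consumer handles SPDK_BDEV_EVENT_REMOVE here */
    }

    static int
    read_first_4k(const char *name) /* e.g. "Malloc0" -- hypothetical */
    {
        struct spdk_bdev_desc *desc;
        struct spdk_io_channel *ch;
        void *buf;

        if (spdk_bdev_open_ext(name, false, bdev_event_cb, NULL, &desc)) {
            return -1;
        }
        ch = spdk_bdev_get_io_channel(desc);
        buf = spdk_dma_zmalloc(4096, 0x1000, NULL); /* DMA-safe buffer */
        return spdk_bdev_read(desc, ch, buf, 0, 4096, read_done, NULL);
    }
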
00:07:39.567 CC lib/ftl/ftl_io.o 00:07:39.567 SO libspdk_nbd.so.7.0 00:07:39.567 CC lib/ublk/ublk_rpc.o 00:07:39.826 SYMLINK libspdk_nbd.so 00:07:39.826 CC lib/scsi/scsi_rpc.o 00:07:39.826 CC lib/nvmf/subsystem.o 00:07:39.826 CC lib/scsi/task.o 00:07:39.826 CC lib/nvmf/nvmf.o 00:07:39.826 LIB libspdk_ublk.a 00:07:39.826 CC lib/nvmf/nvmf_rpc.o 00:07:39.826 CC lib/ftl/ftl_sb.o 00:07:39.826 LIB libspdk_blobfs.a 00:07:39.826 SO libspdk_ublk.so.3.0 00:07:40.085 SO libspdk_blobfs.so.11.0 00:07:40.085 SYMLINK libspdk_ublk.so 00:07:40.085 CC lib/ftl/ftl_l2p.o 00:07:40.085 LIB libspdk_lvol.a 00:07:40.085 SYMLINK libspdk_blobfs.so 00:07:40.085 CC lib/ftl/ftl_l2p_flat.o 00:07:40.085 LIB libspdk_scsi.a 00:07:40.085 SO libspdk_lvol.so.11.0 00:07:40.085 SO libspdk_scsi.so.9.0 00:07:40.085 SYMLINK libspdk_lvol.so 00:07:40.085 CC lib/ftl/ftl_nv_cache.o 00:07:40.085 CC lib/nvmf/transport.o 00:07:40.344 SYMLINK libspdk_scsi.so 00:07:40.344 CC lib/ftl/ftl_band.o 00:07:40.344 CC lib/iscsi/conn.o 00:07:40.344 CC lib/vhost/vhost.o 00:07:40.603 CC lib/vhost/vhost_rpc.o 00:07:40.862 CC lib/vhost/vhost_scsi.o 00:07:40.862 CC lib/ftl/ftl_band_ops.o 00:07:41.121 CC lib/vhost/vhost_blk.o 00:07:41.121 CC lib/nvmf/tcp.o 00:07:41.121 CC lib/nvmf/stubs.o 00:07:41.380 CC lib/iscsi/init_grp.o 00:07:41.380 CC lib/nvmf/mdns_server.o 00:07:41.380 CC lib/ftl/ftl_writer.o 00:07:41.380 CC lib/ftl/ftl_rq.o 00:07:41.380 CC lib/nvmf/rdma.o 00:07:41.638 CC lib/iscsi/iscsi.o 00:07:41.638 CC lib/nvmf/auth.o 00:07:41.638 CC lib/vhost/rte_vhost_user.o 00:07:41.638 CC lib/ftl/ftl_reloc.o 00:07:41.638 CC lib/ftl/ftl_l2p_cache.o 00:07:41.896 CC lib/ftl/ftl_p2l.o 00:07:42.154 CC lib/iscsi/param.o 00:07:42.154 CC lib/ftl/ftl_p2l_log.o 00:07:42.412 CC lib/ftl/mngt/ftl_mngt.o 00:07:42.412 CC lib/iscsi/portal_grp.o 00:07:42.412 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:42.412 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:42.412 CC lib/iscsi/tgt_node.o 00:07:42.670 CC lib/iscsi/iscsi_subsystem.o 00:07:42.670 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:42.670 CC lib/iscsi/iscsi_rpc.o 00:07:42.670 CC lib/iscsi/task.o 00:07:42.670 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:42.670 LIB libspdk_vhost.a 00:07:42.930 SO libspdk_vhost.so.8.0 00:07:42.930 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:42.930 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:42.930 SYMLINK libspdk_vhost.so 00:07:42.930 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:43.189 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:43.189 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:43.189 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:43.189 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:43.189 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:43.189 CC lib/ftl/utils/ftl_conf.o 00:07:43.189 CC lib/ftl/utils/ftl_md.o 00:07:43.189 CC lib/ftl/utils/ftl_mempool.o 00:07:43.189 CC lib/ftl/utils/ftl_bitmap.o 00:07:43.453 CC lib/ftl/utils/ftl_property.o 00:07:43.453 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:43.453 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:43.453 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:43.453 LIB libspdk_iscsi.a 00:07:43.453 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:43.453 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:43.453 SO libspdk_iscsi.so.8.0 00:07:43.718 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:43.718 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:43.718 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:43.718 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:43.718 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:43.718 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:43.718 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:43.718 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:43.718 SYMLINK 
libspdk_iscsi.so 00:07:43.718 CC lib/ftl/base/ftl_base_dev.o 00:07:43.976 CC lib/ftl/base/ftl_base_bdev.o 00:07:43.976 CC lib/ftl/ftl_trace.o 00:07:44.234 LIB libspdk_ftl.a 00:07:44.234 LIB libspdk_nvmf.a 00:07:44.494 SO libspdk_ftl.so.9.0 00:07:44.494 SO libspdk_nvmf.so.20.0 00:07:44.753 SYMLINK libspdk_ftl.so 00:07:44.753 SYMLINK libspdk_nvmf.so 00:07:45.012 CC module/env_dpdk/env_dpdk_rpc.o 00:07:45.270 CC module/accel/iaa/accel_iaa.o 00:07:45.270 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:45.270 CC module/accel/error/accel_error.o 00:07:45.270 CC module/accel/dsa/accel_dsa.o 00:07:45.270 CC module/fsdev/aio/fsdev_aio.o 00:07:45.270 CC module/accel/ioat/accel_ioat.o 00:07:45.270 CC module/sock/posix/posix.o 00:07:45.270 CC module/blob/bdev/blob_bdev.o 00:07:45.270 CC module/keyring/file/keyring.o 00:07:45.270 LIB libspdk_env_dpdk_rpc.a 00:07:45.270 SO libspdk_env_dpdk_rpc.so.6.0 00:07:45.270 SYMLINK libspdk_env_dpdk_rpc.so 00:07:45.270 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:45.270 CC module/keyring/file/keyring_rpc.o 00:07:45.528 CC module/accel/ioat/accel_ioat_rpc.o 00:07:45.528 CC module/accel/iaa/accel_iaa_rpc.o 00:07:45.528 CC module/accel/error/accel_error_rpc.o 00:07:45.528 LIB libspdk_scheduler_dynamic.a 00:07:45.528 SO libspdk_scheduler_dynamic.so.4.0 00:07:45.528 LIB libspdk_keyring_file.a 00:07:45.528 LIB libspdk_blob_bdev.a 00:07:45.528 CC module/fsdev/aio/linux_aio_mgr.o 00:07:45.528 LIB libspdk_accel_ioat.a 00:07:45.528 SO libspdk_keyring_file.so.2.0 00:07:45.528 SO libspdk_blob_bdev.so.12.0 00:07:45.528 SYMLINK libspdk_scheduler_dynamic.so 00:07:45.528 CC module/accel/dsa/accel_dsa_rpc.o 00:07:45.528 LIB libspdk_accel_iaa.a 00:07:45.528 LIB libspdk_accel_error.a 00:07:45.528 SO libspdk_accel_ioat.so.6.0 00:07:45.528 SO libspdk_accel_iaa.so.3.0 00:07:45.528 SYMLINK libspdk_blob_bdev.so 00:07:45.528 SYMLINK libspdk_keyring_file.so 00:07:45.786 SO libspdk_accel_error.so.2.0 00:07:45.786 SYMLINK libspdk_accel_ioat.so 00:07:45.786 SYMLINK libspdk_accel_iaa.so 00:07:45.786 LIB libspdk_accel_dsa.a 00:07:45.786 SYMLINK libspdk_accel_error.so 00:07:45.786 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:45.786 SO libspdk_accel_dsa.so.5.0 00:07:45.786 SYMLINK libspdk_accel_dsa.so 00:07:45.786 CC module/keyring/linux/keyring.o 00:07:45.786 CC module/scheduler/gscheduler/gscheduler.o 00:07:46.061 LIB libspdk_scheduler_dpdk_governor.a 00:07:46.061 CC module/bdev/delay/vbdev_delay.o 00:07:46.061 CC module/blobfs/bdev/blobfs_bdev.o 00:07:46.061 CC module/bdev/error/vbdev_error.o 00:07:46.061 CC module/bdev/gpt/gpt.o 00:07:46.061 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:46.061 CC module/bdev/lvol/vbdev_lvol.o 00:07:46.061 CC module/keyring/linux/keyring_rpc.o 00:07:46.061 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:46.061 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:46.061 LIB libspdk_scheduler_gscheduler.a 00:07:46.061 SO libspdk_scheduler_gscheduler.so.4.0 00:07:46.061 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:46.061 LIB libspdk_keyring_linux.a 00:07:46.061 SYMLINK libspdk_scheduler_gscheduler.so 00:07:46.061 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:46.061 CC module/bdev/gpt/vbdev_gpt.o 00:07:46.061 LIB libspdk_sock_posix.a 00:07:46.061 SO libspdk_keyring_linux.so.1.0 00:07:46.322 LIB libspdk_fsdev_aio.a 00:07:46.322 SO libspdk_sock_posix.so.6.0 00:07:46.322 SO libspdk_fsdev_aio.so.1.0 00:07:46.322 SYMLINK libspdk_keyring_linux.so 00:07:46.322 CC module/bdev/error/vbdev_error_rpc.o 00:07:46.322 SYMLINK libspdk_sock_posix.so 00:07:46.322 SYMLINK 
libspdk_fsdev_aio.so 00:07:46.322 LIB libspdk_blobfs_bdev.a 00:07:46.322 SO libspdk_blobfs_bdev.so.6.0 00:07:46.322 LIB libspdk_bdev_delay.a 00:07:46.322 SO libspdk_bdev_delay.so.6.0 00:07:46.322 LIB libspdk_bdev_error.a 00:07:46.322 SYMLINK libspdk_blobfs_bdev.so 00:07:46.580 CC module/bdev/malloc/bdev_malloc.o 00:07:46.580 CC module/bdev/null/bdev_null.o 00:07:46.581 SO libspdk_bdev_error.so.6.0 00:07:46.581 CC module/bdev/nvme/bdev_nvme.o 00:07:46.581 LIB libspdk_bdev_gpt.a 00:07:46.581 SYMLINK libspdk_bdev_delay.so 00:07:46.581 CC module/bdev/passthru/vbdev_passthru.o 00:07:46.581 CC module/bdev/null/bdev_null_rpc.o 00:07:46.581 SO libspdk_bdev_gpt.so.6.0 00:07:46.581 SYMLINK libspdk_bdev_error.so 00:07:46.581 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:46.581 SYMLINK libspdk_bdev_gpt.so 00:07:46.581 CC module/bdev/raid/bdev_raid.o 00:07:46.581 CC module/bdev/split/vbdev_split.o 00:07:46.581 LIB libspdk_bdev_lvol.a 00:07:46.581 SO libspdk_bdev_lvol.so.6.0 00:07:46.839 CC module/bdev/split/vbdev_split_rpc.o 00:07:46.839 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:46.839 SYMLINK libspdk_bdev_lvol.so 00:07:46.839 LIB libspdk_bdev_null.a 00:07:46.839 SO libspdk_bdev_null.so.6.0 00:07:46.839 LIB libspdk_bdev_passthru.a 00:07:46.839 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:46.839 SYMLINK libspdk_bdev_null.so 00:07:46.839 SO libspdk_bdev_passthru.so.6.0 00:07:46.839 LIB libspdk_bdev_split.a 00:07:46.839 CC module/bdev/xnvme/bdev_xnvme.o 00:07:46.839 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:46.839 CC module/bdev/aio/bdev_aio.o 00:07:47.097 SO libspdk_bdev_split.so.6.0 00:07:47.097 SYMLINK libspdk_bdev_passthru.so 00:07:47.097 SYMLINK libspdk_bdev_split.so 00:07:47.097 CC module/bdev/nvme/nvme_rpc.o 00:07:47.097 CC module/bdev/ftl/bdev_ftl.o 00:07:47.097 LIB libspdk_bdev_malloc.a 00:07:47.097 SO libspdk_bdev_malloc.so.6.0 00:07:47.097 CC module/bdev/iscsi/bdev_iscsi.o 00:07:47.097 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:47.355 SYMLINK libspdk_bdev_malloc.so 00:07:47.355 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:47.355 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:07:47.355 CC module/bdev/aio/bdev_aio_rpc.o 00:07:47.355 LIB libspdk_bdev_zone_block.a 00:07:47.355 SO libspdk_bdev_zone_block.so.6.0 00:07:47.355 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:47.355 CC module/bdev/nvme/bdev_mdns_client.o 00:07:47.355 LIB libspdk_bdev_xnvme.a 00:07:47.355 SYMLINK libspdk_bdev_zone_block.so 00:07:47.355 CC module/bdev/nvme/vbdev_opal.o 00:07:47.614 SO libspdk_bdev_xnvme.so.3.0 00:07:47.614 LIB libspdk_bdev_aio.a 00:07:47.614 SO libspdk_bdev_aio.so.6.0 00:07:47.614 SYMLINK libspdk_bdev_xnvme.so 00:07:47.614 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:47.614 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:47.614 LIB libspdk_bdev_iscsi.a 00:07:47.614 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:47.614 SO libspdk_bdev_iscsi.so.6.0 00:07:47.614 SYMLINK libspdk_bdev_aio.so 00:07:47.614 CC module/bdev/raid/bdev_raid_rpc.o 00:07:47.614 CC module/bdev/raid/bdev_raid_sb.o 00:07:47.614 LIB libspdk_bdev_ftl.a 00:07:47.874 SO libspdk_bdev_ftl.so.6.0 00:07:47.874 SYMLINK libspdk_bdev_iscsi.so 00:07:47.874 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:47.874 CC module/bdev/raid/raid0.o 00:07:47.874 SYMLINK libspdk_bdev_ftl.so 00:07:47.874 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:47.874 CC module/bdev/raid/raid1.o 00:07:47.874 CC module/bdev/raid/concat.o 00:07:48.131 LIB libspdk_bdev_raid.a 00:07:48.409 SO libspdk_bdev_raid.so.6.0 00:07:48.409 LIB libspdk_bdev_virtio.a 00:07:48.409 SO 
libspdk_bdev_virtio.so.6.0 00:07:48.409 SYMLINK libspdk_bdev_raid.so 00:07:48.409 SYMLINK libspdk_bdev_virtio.so 00:07:49.786 LIB libspdk_bdev_nvme.a 00:07:49.786 SO libspdk_bdev_nvme.so.7.1 00:07:49.786 SYMLINK libspdk_bdev_nvme.so 00:07:50.723 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:50.723 CC module/event/subsystems/sock/sock.o 00:07:50.723 CC module/event/subsystems/keyring/keyring.o 00:07:50.723 CC module/event/subsystems/vmd/vmd.o 00:07:50.723 CC module/event/subsystems/iobuf/iobuf.o 00:07:50.723 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:50.723 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:50.723 CC module/event/subsystems/fsdev/fsdev.o 00:07:50.723 CC module/event/subsystems/scheduler/scheduler.o 00:07:50.723 LIB libspdk_event_keyring.a 00:07:50.723 LIB libspdk_event_sock.a 00:07:50.723 LIB libspdk_event_fsdev.a 00:07:50.723 SO libspdk_event_keyring.so.1.0 00:07:50.723 LIB libspdk_event_vmd.a 00:07:50.723 SO libspdk_event_fsdev.so.1.0 00:07:50.723 LIB libspdk_event_scheduler.a 00:07:50.723 SO libspdk_event_sock.so.5.0 00:07:50.723 LIB libspdk_event_vhost_blk.a 00:07:50.723 LIB libspdk_event_iobuf.a 00:07:50.723 SO libspdk_event_vmd.so.6.0 00:07:50.723 SO libspdk_event_scheduler.so.4.0 00:07:50.723 SO libspdk_event_vhost_blk.so.3.0 00:07:50.723 SYMLINK libspdk_event_keyring.so 00:07:50.723 SO libspdk_event_iobuf.so.3.0 00:07:50.723 SYMLINK libspdk_event_sock.so 00:07:50.723 SYMLINK libspdk_event_fsdev.so 00:07:50.723 SYMLINK libspdk_event_vmd.so 00:07:50.723 SYMLINK libspdk_event_scheduler.so 00:07:50.723 SYMLINK libspdk_event_vhost_blk.so 00:07:50.723 SYMLINK libspdk_event_iobuf.so 00:07:51.293 CC module/event/subsystems/accel/accel.o 00:07:51.293 LIB libspdk_event_accel.a 00:07:51.293 SO libspdk_event_accel.so.6.0 00:07:51.552 SYMLINK libspdk_event_accel.so 00:07:51.812 CC module/event/subsystems/bdev/bdev.o 00:07:52.071 LIB libspdk_event_bdev.a 00:07:52.071 SO libspdk_event_bdev.so.6.0 00:07:52.071 SYMLINK libspdk_event_bdev.so 00:07:52.331 CC module/event/subsystems/ublk/ublk.o 00:07:52.331 CC module/event/subsystems/nbd/nbd.o 00:07:52.331 CC module/event/subsystems/scsi/scsi.o 00:07:52.331 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:52.331 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:52.591 LIB libspdk_event_nbd.a 00:07:52.591 LIB libspdk_event_ublk.a 00:07:52.591 LIB libspdk_event_scsi.a 00:07:52.591 SO libspdk_event_nbd.so.6.0 00:07:52.591 SO libspdk_event_ublk.so.3.0 00:07:52.591 SO libspdk_event_scsi.so.6.0 00:07:52.591 SYMLINK libspdk_event_nbd.so 00:07:52.591 SYMLINK libspdk_event_ublk.so 00:07:52.591 SYMLINK libspdk_event_scsi.so 00:07:52.591 LIB libspdk_event_nvmf.a 00:07:52.591 SO libspdk_event_nvmf.so.6.0 00:07:52.850 SYMLINK libspdk_event_nvmf.so 00:07:52.850 CC module/event/subsystems/iscsi/iscsi.o 00:07:52.850 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:53.171 LIB libspdk_event_vhost_scsi.a 00:07:53.171 LIB libspdk_event_iscsi.a 00:07:53.171 SO libspdk_event_vhost_scsi.so.3.0 00:07:53.171 SO libspdk_event_iscsi.so.6.0 00:07:53.171 SYMLINK libspdk_event_iscsi.so 00:07:53.171 SYMLINK libspdk_event_vhost_scsi.so 00:07:53.431 SO libspdk.so.6.0 00:07:53.431 SYMLINK libspdk.so 00:07:53.689 CC app/spdk_lspci/spdk_lspci.o 00:07:53.689 CC app/spdk_nvme_perf/perf.o 00:07:53.689 CXX app/trace/trace.o 00:07:53.689 CC app/spdk_nvme_identify/identify.o 00:07:53.689 CC app/trace_record/trace_record.o 00:07:53.689 CC app/iscsi_tgt/iscsi_tgt.o 00:07:53.689 CC app/spdk_tgt/spdk_tgt.o 00:07:53.689 CC app/nvmf_tgt/nvmf_main.o 00:07:53.949 
CC examples/util/zipf/zipf.o 00:07:53.949 CC test/thread/poller_perf/poller_perf.o 00:07:53.949 LINK spdk_lspci 00:07:53.949 LINK iscsi_tgt 00:07:53.949 LINK zipf 00:07:53.949 LINK spdk_tgt 00:07:53.949 LINK nvmf_tgt 00:07:54.207 LINK poller_perf 00:07:54.207 LINK spdk_trace_record 00:07:54.465 CC examples/ioat/perf/perf.o 00:07:54.466 CC app/spdk_nvme_discover/discovery_aer.o 00:07:54.466 CC app/spdk_top/spdk_top.o 00:07:54.466 LINK spdk_trace 00:07:54.466 CC examples/vmd/lsvmd/lsvmd.o 00:07:54.466 CC test/dma/test_dma/test_dma.o 00:07:54.725 LINK ioat_perf 00:07:54.725 CC test/app/bdev_svc/bdev_svc.o 00:07:54.725 CC examples/idxd/perf/perf.o 00:07:54.725 LINK spdk_nvme_discover 00:07:54.725 LINK lsvmd 00:07:54.725 LINK spdk_nvme_perf 00:07:54.725 LINK spdk_nvme_identify 00:07:54.725 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:54.984 LINK bdev_svc 00:07:54.984 CC examples/ioat/verify/verify.o 00:07:54.984 CC examples/vmd/led/led.o 00:07:54.984 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:54.984 LINK idxd_perf 00:07:55.243 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:55.243 CC app/spdk_dd/spdk_dd.o 00:07:55.243 LINK led 00:07:55.243 LINK test_dma 00:07:55.243 LINK verify 00:07:55.243 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:55.243 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:55.503 LINK interrupt_tgt 00:07:55.503 TEST_HEADER include/spdk/accel.h 00:07:55.503 TEST_HEADER include/spdk/accel_module.h 00:07:55.503 TEST_HEADER include/spdk/assert.h 00:07:55.503 TEST_HEADER include/spdk/barrier.h 00:07:55.503 TEST_HEADER include/spdk/base64.h 00:07:55.503 TEST_HEADER include/spdk/bdev.h 00:07:55.503 TEST_HEADER include/spdk/bdev_module.h 00:07:55.503 TEST_HEADER include/spdk/bdev_zone.h 00:07:55.503 TEST_HEADER include/spdk/bit_array.h 00:07:55.503 TEST_HEADER include/spdk/bit_pool.h 00:07:55.503 TEST_HEADER include/spdk/blob_bdev.h 00:07:55.503 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:55.503 TEST_HEADER include/spdk/blobfs.h 00:07:55.503 TEST_HEADER include/spdk/blob.h 00:07:55.503 TEST_HEADER include/spdk/conf.h 00:07:55.503 TEST_HEADER include/spdk/config.h 00:07:55.503 TEST_HEADER include/spdk/cpuset.h 00:07:55.503 TEST_HEADER include/spdk/crc16.h 00:07:55.503 CC examples/thread/thread/thread_ex.o 00:07:55.503 TEST_HEADER include/spdk/crc32.h 00:07:55.503 TEST_HEADER include/spdk/crc64.h 00:07:55.503 TEST_HEADER include/spdk/dif.h 00:07:55.503 TEST_HEADER include/spdk/dma.h 00:07:55.503 TEST_HEADER include/spdk/endian.h 00:07:55.503 TEST_HEADER include/spdk/env_dpdk.h 00:07:55.503 TEST_HEADER include/spdk/env.h 00:07:55.503 TEST_HEADER include/spdk/event.h 00:07:55.503 TEST_HEADER include/spdk/fd_group.h 00:07:55.503 TEST_HEADER include/spdk/fd.h 00:07:55.503 TEST_HEADER include/spdk/file.h 00:07:55.503 TEST_HEADER include/spdk/fsdev.h 00:07:55.503 TEST_HEADER include/spdk/fsdev_module.h 00:07:55.503 TEST_HEADER include/spdk/ftl.h 00:07:55.503 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:55.503 CC examples/sock/hello_world/hello_sock.o 00:07:55.503 TEST_HEADER include/spdk/gpt_spec.h 00:07:55.503 TEST_HEADER include/spdk/hexlify.h 00:07:55.503 TEST_HEADER include/spdk/histogram_data.h 00:07:55.503 TEST_HEADER include/spdk/idxd.h 00:07:55.503 TEST_HEADER include/spdk/idxd_spec.h 00:07:55.503 TEST_HEADER include/spdk/init.h 00:07:55.503 TEST_HEADER include/spdk/ioat.h 00:07:55.503 TEST_HEADER include/spdk/ioat_spec.h 00:07:55.503 TEST_HEADER include/spdk/iscsi_spec.h 00:07:55.503 TEST_HEADER include/spdk/json.h 00:07:55.503 TEST_HEADER include/spdk/jsonrpc.h 
00:07:55.503 TEST_HEADER include/spdk/keyring.h 00:07:55.503 TEST_HEADER include/spdk/keyring_module.h 00:07:55.503 TEST_HEADER include/spdk/likely.h 00:07:55.503 TEST_HEADER include/spdk/log.h 00:07:55.503 TEST_HEADER include/spdk/lvol.h 00:07:55.503 TEST_HEADER include/spdk/md5.h 00:07:55.503 TEST_HEADER include/spdk/memory.h 00:07:55.503 TEST_HEADER include/spdk/mmio.h 00:07:55.503 TEST_HEADER include/spdk/nbd.h 00:07:55.503 TEST_HEADER include/spdk/net.h 00:07:55.503 TEST_HEADER include/spdk/notify.h 00:07:55.503 TEST_HEADER include/spdk/nvme.h 00:07:55.503 TEST_HEADER include/spdk/nvme_intel.h 00:07:55.503 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:55.503 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:55.503 TEST_HEADER include/spdk/nvme_spec.h 00:07:55.503 TEST_HEADER include/spdk/nvme_zns.h 00:07:55.503 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:55.503 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:55.503 TEST_HEADER include/spdk/nvmf.h 00:07:55.503 TEST_HEADER include/spdk/nvmf_spec.h 00:07:55.503 TEST_HEADER include/spdk/nvmf_transport.h 00:07:55.503 TEST_HEADER include/spdk/opal.h 00:07:55.503 TEST_HEADER include/spdk/opal_spec.h 00:07:55.503 TEST_HEADER include/spdk/pci_ids.h 00:07:55.503 TEST_HEADER include/spdk/pipe.h 00:07:55.503 LINK spdk_top 00:07:55.503 TEST_HEADER include/spdk/queue.h 00:07:55.503 TEST_HEADER include/spdk/reduce.h 00:07:55.503 TEST_HEADER include/spdk/rpc.h 00:07:55.503 LINK spdk_dd 00:07:55.503 TEST_HEADER include/spdk/scheduler.h 00:07:55.503 TEST_HEADER include/spdk/scsi.h 00:07:55.503 TEST_HEADER include/spdk/scsi_spec.h 00:07:55.503 TEST_HEADER include/spdk/sock.h 00:07:55.503 TEST_HEADER include/spdk/stdinc.h 00:07:55.503 TEST_HEADER include/spdk/string.h 00:07:55.503 TEST_HEADER include/spdk/thread.h 00:07:55.503 TEST_HEADER include/spdk/trace.h 00:07:55.503 TEST_HEADER include/spdk/trace_parser.h 00:07:55.503 TEST_HEADER include/spdk/tree.h 00:07:55.762 TEST_HEADER include/spdk/ublk.h 00:07:55.762 TEST_HEADER include/spdk/util.h 00:07:55.762 LINK nvme_fuzz 00:07:55.762 TEST_HEADER include/spdk/uuid.h 00:07:55.762 TEST_HEADER include/spdk/version.h 00:07:55.762 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:55.762 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:55.762 TEST_HEADER include/spdk/vhost.h 00:07:55.762 TEST_HEADER include/spdk/vmd.h 00:07:55.762 TEST_HEADER include/spdk/xor.h 00:07:55.762 TEST_HEADER include/spdk/zipf.h 00:07:55.762 CXX test/cpp_headers/accel.o 00:07:55.762 CC test/env/mem_callbacks/mem_callbacks.o 00:07:55.762 CC test/env/vtophys/vtophys.o 00:07:55.762 LINK thread 00:07:55.762 CXX test/cpp_headers/accel_module.o 00:07:55.762 LINK vhost_fuzz 00:07:55.762 LINK hello_sock 00:07:56.022 LINK vtophys 00:07:56.022 CXX test/cpp_headers/assert.o 00:07:56.022 CC test/app/histogram_perf/histogram_perf.o 00:07:56.022 CC app/vhost/vhost.o 00:07:56.022 CC app/fio/nvme/fio_plugin.o 00:07:56.281 CXX test/cpp_headers/barrier.o 00:07:56.281 LINK histogram_perf 00:07:56.281 CC app/fio/bdev/fio_plugin.o 00:07:56.281 LINK vhost 00:07:56.281 CC examples/accel/perf/accel_perf.o 00:07:56.281 CC test/event/event_perf/event_perf.o 00:07:56.281 LINK mem_callbacks 00:07:56.281 CC test/nvme/aer/aer.o 00:07:56.281 CXX test/cpp_headers/base64.o 00:07:56.281 CXX test/cpp_headers/bdev.o 00:07:56.540 LINK event_perf 00:07:56.540 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:56.540 CXX test/cpp_headers/bdev_module.o 00:07:56.540 CC test/app/stub/stub.o 00:07:56.540 CC test/app/jsoncat/jsoncat.o 00:07:56.800 LINK aer 00:07:56.800 CC 
test/event/reactor/reactor.o 00:07:56.800 LINK spdk_nvme 00:07:56.800 LINK env_dpdk_post_init 00:07:56.800 LINK jsoncat 00:07:56.800 CXX test/cpp_headers/bdev_zone.o 00:07:56.800 LINK spdk_bdev 00:07:56.800 LINK stub 00:07:56.800 LINK reactor 00:07:56.800 LINK accel_perf 00:07:57.059 CC test/nvme/reset/reset.o 00:07:57.059 CXX test/cpp_headers/bit_array.o 00:07:57.059 CC test/env/memory/memory_ut.o 00:07:57.059 CC examples/blob/hello_world/hello_blob.o 00:07:57.059 CC examples/blob/cli/blobcli.o 00:07:57.059 CC test/event/reactor_perf/reactor_perf.o 00:07:57.059 CC examples/nvme/hello_world/hello_world.o 00:07:57.059 CC test/event/app_repeat/app_repeat.o 00:07:57.387 CXX test/cpp_headers/bit_pool.o 00:07:57.387 LINK reset 00:07:57.387 CC test/event/scheduler/scheduler.o 00:07:57.387 LINK reactor_perf 00:07:57.387 LINK hello_blob 00:07:57.387 LINK iscsi_fuzz 00:07:57.387 CXX test/cpp_headers/blob_bdev.o 00:07:57.387 LINK app_repeat 00:07:57.387 LINK hello_world 00:07:57.387 CXX test/cpp_headers/blobfs_bdev.o 00:07:57.659 CC test/nvme/sgl/sgl.o 00:07:57.659 LINK scheduler 00:07:57.659 CC test/rpc_client/rpc_client_test.o 00:07:57.659 LINK blobcli 00:07:57.659 CC test/env/pci/pci_ut.o 00:07:57.659 CC examples/nvme/reconnect/reconnect.o 00:07:57.659 CXX test/cpp_headers/blobfs.o 00:07:57.659 CC test/accel/dif/dif.o 00:07:57.918 CXX test/cpp_headers/blob.o 00:07:57.918 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:57.918 LINK sgl 00:07:57.918 LINK rpc_client_test 00:07:57.918 CXX test/cpp_headers/conf.o 00:07:57.918 CXX test/cpp_headers/config.o 00:07:58.179 CXX test/cpp_headers/cpuset.o 00:07:58.179 LINK reconnect 00:07:58.179 LINK hello_fsdev 00:07:58.179 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:58.179 CC test/nvme/e2edp/nvme_dp.o 00:07:58.179 CC test/blobfs/mkfs/mkfs.o 00:07:58.179 LINK pci_ut 00:07:58.179 CXX test/cpp_headers/crc16.o 00:07:58.179 CC test/lvol/esnap/esnap.o 00:07:58.179 CXX test/cpp_headers/crc32.o 00:07:58.438 CXX test/cpp_headers/crc64.o 00:07:58.438 LINK mkfs 00:07:58.438 LINK memory_ut 00:07:58.438 CXX test/cpp_headers/dif.o 00:07:58.438 CXX test/cpp_headers/dma.o 00:07:58.438 LINK nvme_dp 00:07:58.438 CC test/nvme/overhead/overhead.o 00:07:58.698 LINK dif 00:07:58.698 CXX test/cpp_headers/endian.o 00:07:58.698 CC examples/nvme/arbitration/arbitration.o 00:07:58.698 CC examples/bdev/hello_world/hello_bdev.o 00:07:58.698 CC examples/nvme/hotplug/hotplug.o 00:07:58.698 CC examples/bdev/bdevperf/bdevperf.o 00:07:58.698 CC test/nvme/err_injection/err_injection.o 00:07:58.698 LINK nvme_manage 00:07:58.698 LINK overhead 00:07:58.698 CXX test/cpp_headers/env_dpdk.o 00:07:58.957 CXX test/cpp_headers/env.o 00:07:58.957 LINK err_injection 00:07:58.957 LINK hello_bdev 00:07:58.957 LINK hotplug 00:07:58.957 CXX test/cpp_headers/event.o 00:07:58.957 CXX test/cpp_headers/fd_group.o 00:07:58.957 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:59.217 LINK arbitration 00:07:59.217 CC test/nvme/startup/startup.o 00:07:59.217 CXX test/cpp_headers/fd.o 00:07:59.217 CXX test/cpp_headers/file.o 00:07:59.217 CXX test/cpp_headers/fsdev.o 00:07:59.217 LINK cmb_copy 00:07:59.217 CC test/nvme/reserve/reserve.o 00:07:59.217 LINK startup 00:07:59.217 CXX test/cpp_headers/fsdev_module.o 00:07:59.477 CC examples/nvme/abort/abort.o 00:07:59.477 CXX test/cpp_headers/ftl.o 00:07:59.477 CXX test/cpp_headers/fuse_dispatcher.o 00:07:59.477 CC test/bdev/bdevio/bdevio.o 00:07:59.477 LINK reserve 00:07:59.477 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:59.477 CXX test/cpp_headers/gpt_spec.o 
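(Annotation: the CXX test/cpp_headers/<name>.o entries in this stretch are SPDK's header self-containment check, which compiles each public header in its own translation unit so a header missing one of its own includes fails loudly. A minimal stand-alone sketch of the same idea, with illustrative file names, assuming it is run from the root of an SPDK checkout:)

# compile one public header in isolation; nothing but the header itself is included
echo '#include <spdk/gpt_spec.h>' > check_gpt_spec.cpp
g++ -std=c++11 -I include -c check_gpt_spec.cpp -o /dev/null && echo 'gpt_spec.h is self-contained'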
00:07:59.477 CC test/nvme/simple_copy/simple_copy.o 00:07:59.477 CC test/nvme/connect_stress/connect_stress.o 00:07:59.736 CC test/nvme/boot_partition/boot_partition.o 00:07:59.736 LINK pmr_persistence 00:07:59.736 CXX test/cpp_headers/hexlify.o 00:07:59.736 LINK abort 00:07:59.736 CC test/nvme/compliance/nvme_compliance.o 00:07:59.736 LINK connect_stress 00:07:59.736 LINK bdevperf 00:07:59.736 LINK simple_copy 00:07:59.996 LINK boot_partition 00:07:59.996 LINK bdevio 00:07:59.996 CXX test/cpp_headers/histogram_data.o 00:07:59.996 CXX test/cpp_headers/idxd.o 00:07:59.996 CC test/nvme/fused_ordering/fused_ordering.o 00:07:59.996 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:59.996 CC test/nvme/fdp/fdp.o 00:08:00.255 CC test/nvme/cuse/cuse.o 00:08:00.255 CXX test/cpp_headers/idxd_spec.o 00:08:00.255 CXX test/cpp_headers/init.o 00:08:00.255 LINK nvme_compliance 00:08:00.255 CXX test/cpp_headers/ioat.o 00:08:00.255 LINK fused_ordering 00:08:00.255 LINK doorbell_aers 00:08:00.255 CXX test/cpp_headers/ioat_spec.o 00:08:00.255 CC examples/nvmf/nvmf/nvmf.o 00:08:00.255 CXX test/cpp_headers/iscsi_spec.o 00:08:00.514 CXX test/cpp_headers/json.o 00:08:00.514 CXX test/cpp_headers/jsonrpc.o 00:08:00.514 CXX test/cpp_headers/keyring.o 00:08:00.515 CXX test/cpp_headers/keyring_module.o 00:08:00.515 CXX test/cpp_headers/likely.o 00:08:00.515 LINK fdp 00:08:00.515 CXX test/cpp_headers/log.o 00:08:00.515 CXX test/cpp_headers/lvol.o 00:08:00.515 CXX test/cpp_headers/md5.o 00:08:00.515 CXX test/cpp_headers/memory.o 00:08:00.515 CXX test/cpp_headers/mmio.o 00:08:00.774 CXX test/cpp_headers/nbd.o 00:08:00.774 LINK nvmf 00:08:00.774 CXX test/cpp_headers/net.o 00:08:00.774 CXX test/cpp_headers/notify.o 00:08:00.774 CXX test/cpp_headers/nvme.o 00:08:00.774 CXX test/cpp_headers/nvme_intel.o 00:08:00.774 CXX test/cpp_headers/nvme_ocssd.o 00:08:00.774 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:00.774 CXX test/cpp_headers/nvme_spec.o 00:08:00.774 CXX test/cpp_headers/nvme_zns.o 00:08:00.774 CXX test/cpp_headers/nvmf_cmd.o 00:08:00.774 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:00.774 CXX test/cpp_headers/nvmf.o 00:08:00.774 CXX test/cpp_headers/nvmf_spec.o 00:08:01.032 CXX test/cpp_headers/nvmf_transport.o 00:08:01.032 CXX test/cpp_headers/opal.o 00:08:01.032 CXX test/cpp_headers/opal_spec.o 00:08:01.032 CXX test/cpp_headers/pci_ids.o 00:08:01.032 CXX test/cpp_headers/pipe.o 00:08:01.032 CXX test/cpp_headers/queue.o 00:08:01.032 CXX test/cpp_headers/reduce.o 00:08:01.032 CXX test/cpp_headers/rpc.o 00:08:01.032 CXX test/cpp_headers/scheduler.o 00:08:01.032 CXX test/cpp_headers/scsi.o 00:08:01.032 CXX test/cpp_headers/scsi_spec.o 00:08:01.032 CXX test/cpp_headers/sock.o 00:08:01.291 CXX test/cpp_headers/stdinc.o 00:08:01.291 CXX test/cpp_headers/string.o 00:08:01.291 CXX test/cpp_headers/thread.o 00:08:01.291 CXX test/cpp_headers/trace.o 00:08:01.291 CXX test/cpp_headers/trace_parser.o 00:08:01.291 CXX test/cpp_headers/tree.o 00:08:01.291 CXX test/cpp_headers/ublk.o 00:08:01.291 CXX test/cpp_headers/util.o 00:08:01.291 CXX test/cpp_headers/uuid.o 00:08:01.291 CXX test/cpp_headers/version.o 00:08:01.291 CXX test/cpp_headers/vfio_user_pci.o 00:08:01.291 CXX test/cpp_headers/vfio_user_spec.o 00:08:01.291 CXX test/cpp_headers/vhost.o 00:08:01.291 CXX test/cpp_headers/vmd.o 00:08:01.549 CXX test/cpp_headers/xor.o 00:08:01.549 CXX test/cpp_headers/zipf.o 00:08:01.549 LINK cuse 00:08:04.841 LINK esnap 00:08:04.841 ************************************ 00:08:04.841 END TEST make 00:08:04.841 
************************************ 00:08:04.841 00:08:04.841 real 1m32.475s 00:08:04.841 user 8m31.199s 00:08:04.841 sys 1m44.469s 00:08:04.841 18:11:58 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:04.841 18:11:58 make -- common/autotest_common.sh@10 -- $ set +x 00:08:04.841 18:11:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:04.841 18:11:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:04.841 18:11:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:04.841 18:11:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:04.841 18:11:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:04.841 18:11:58 -- pm/common@44 -- $ pid=5496 00:08:04.841 18:11:58 -- pm/common@50 -- $ kill -TERM 5496 00:08:04.841 18:11:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:04.841 18:11:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:04.841 18:11:58 -- pm/common@44 -- $ pid=5498 00:08:04.841 18:11:58 -- pm/common@50 -- $ kill -TERM 5498 00:08:04.841 18:11:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:04.841 18:11:58 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:05.100 18:11:58 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.100 18:11:58 -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.100 18:11:58 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:05.100 18:11:58 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:05.100 18:11:58 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.100 18:11:58 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.100 18:11:58 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.100 18:11:58 -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.100 18:11:58 -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.100 18:11:58 -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.100 18:11:58 -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.100 18:11:58 -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.100 18:11:58 -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.100 18:11:58 -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.100 18:11:58 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.100 18:11:58 -- scripts/common.sh@344 -- # case "$op" in 00:08:05.100 18:11:58 -- scripts/common.sh@345 -- # : 1 00:08:05.101 18:11:58 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.101 18:11:58 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.101 18:11:58 -- scripts/common.sh@365 -- # decimal 1 00:08:05.101 18:11:58 -- scripts/common.sh@353 -- # local d=1 00:08:05.101 18:11:58 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.101 18:11:58 -- scripts/common.sh@355 -- # echo 1 00:08:05.101 18:11:58 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.101 18:11:58 -- scripts/common.sh@366 -- # decimal 2 00:08:05.101 18:11:58 -- scripts/common.sh@353 -- # local d=2 00:08:05.101 18:11:58 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.101 18:11:58 -- scripts/common.sh@355 -- # echo 2 00:08:05.101 18:11:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.101 18:11:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.101 18:11:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.101 18:11:58 -- scripts/common.sh@368 -- # return 0 00:08:05.101 18:11:58 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.101 18:11:58 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:05.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.101 --rc genhtml_branch_coverage=1 00:08:05.101 --rc genhtml_function_coverage=1 00:08:05.101 --rc genhtml_legend=1 00:08:05.101 --rc geninfo_all_blocks=1 00:08:05.101 --rc geninfo_unexecuted_blocks=1 00:08:05.101 00:08:05.101 ' 00:08:05.101 18:11:58 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:05.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.101 --rc genhtml_branch_coverage=1 00:08:05.101 --rc genhtml_function_coverage=1 00:08:05.101 --rc genhtml_legend=1 00:08:05.101 --rc geninfo_all_blocks=1 00:08:05.101 --rc geninfo_unexecuted_blocks=1 00:08:05.101 00:08:05.101 ' 00:08:05.101 18:11:58 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:05.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.101 --rc genhtml_branch_coverage=1 00:08:05.101 --rc genhtml_function_coverage=1 00:08:05.101 --rc genhtml_legend=1 00:08:05.101 --rc geninfo_all_blocks=1 00:08:05.101 --rc geninfo_unexecuted_blocks=1 00:08:05.101 00:08:05.101 ' 00:08:05.101 18:11:58 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:05.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.101 --rc genhtml_branch_coverage=1 00:08:05.101 --rc genhtml_function_coverage=1 00:08:05.101 --rc genhtml_legend=1 00:08:05.101 --rc geninfo_all_blocks=1 00:08:05.101 --rc geninfo_unexecuted_blocks=1 00:08:05.101 00:08:05.101 ' 00:08:05.101 18:11:58 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:05.101 18:11:58 -- nvmf/common.sh@7 -- # uname -s 00:08:05.101 18:11:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.101 18:11:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.101 18:11:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.101 18:11:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.101 18:11:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.101 18:11:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.101 18:11:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.101 18:11:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.101 18:11:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.101 18:11:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.101 18:11:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:57a1b531-b449-4b85-a403-bab0c2dbdf9d 00:08:05.101 
18:11:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=57a1b531-b449-4b85-a403-bab0c2dbdf9d 00:08:05.101 18:11:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.101 18:11:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.101 18:11:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:05.101 18:11:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.101 18:11:58 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.101 18:11:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.101 18:11:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.101 18:11:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.101 18:11:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.101 18:11:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.101 18:11:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.101 18:11:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.101 18:11:58 -- paths/export.sh@5 -- # export PATH 00:08:05.101 18:11:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.101 18:11:58 -- nvmf/common.sh@51 -- # : 0 00:08:05.101 18:11:58 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:05.101 18:11:58 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:05.101 18:11:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.101 18:11:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.101 18:11:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.101 18:11:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:05.101 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:05.101 18:11:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:05.101 18:11:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:05.101 18:11:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:05.101 18:11:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:05.101 18:11:58 -- spdk/autotest.sh@32 -- # uname -s 00:08:05.101 18:11:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:05.101 18:11:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:05.101 18:11:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:05.101 18:11:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:05.101 18:11:58 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:05.101 18:11:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:05.101 18:11:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:05.101 18:11:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:05.101 18:11:58 -- spdk/autotest.sh@48 -- # udevadm_pid=55063 00:08:05.101 18:11:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:05.101 18:11:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:05.101 18:11:58 -- pm/common@17 -- # local monitor 00:08:05.101 18:11:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:05.101 18:11:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:05.101 18:11:58 -- pm/common@25 -- # sleep 1 00:08:05.101 18:11:58 -- pm/common@21 -- # date +%s 00:08:05.360 18:11:58 -- pm/common@21 -- # date +%s 00:08:05.360 18:11:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732644718 00:08:05.360 18:11:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732644718 00:08:05.360 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732644718_collect-cpu-load.pm.log 00:08:05.360 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732644718_collect-vmstat.pm.log 00:08:06.293 18:11:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:06.293 18:11:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:06.293 18:11:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:06.293 18:11:59 -- common/autotest_common.sh@10 -- # set +x 00:08:06.293 18:11:59 -- spdk/autotest.sh@59 -- # create_test_list 00:08:06.293 18:11:59 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:06.293 18:11:59 -- common/autotest_common.sh@10 -- # set +x 00:08:06.293 18:11:59 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:06.293 18:11:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:06.293 18:11:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:06.293 18:11:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:06.293 18:11:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:06.293 18:11:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:06.293 18:11:59 -- common/autotest_common.sh@1457 -- # uname 00:08:06.293 18:11:59 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:06.293 18:11:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:06.293 18:11:59 -- common/autotest_common.sh@1477 -- # uname 00:08:06.293 18:11:59 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:06.293 18:11:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:06.293 18:11:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:06.293 lcov: LCOV version 1.15 00:08:06.293 18:11:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:24.372 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:24.372 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:39.285 18:12:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:39.285 18:12:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.285 18:12:32 -- common/autotest_common.sh@10 -- # set +x 00:08:39.285 18:12:32 -- spdk/autotest.sh@78 -- # rm -f 00:08:39.285 18:12:32 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:39.542 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:40.476 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:40.476 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:40.476 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:08:40.476 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:08:40.476 18:12:33 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:40.476 18:12:33 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:40.476 18:12:33 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:40.476 18:12:33 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:40.476 18:12:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:40.476 18:12:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:40.476 18:12:33 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:40.476 18:12:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:40.476 18:12:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:40.476 18:12:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:40.476 18:12:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:08:40.476 18:12:33 -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:08:40.476 18:12:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:08:40.476 18:12:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:40.476 18:12:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:40.476 18:12:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:08:40.476 18:12:33 -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:08:40.476 18:12:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:08:40.476 18:12:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:40.476 18:12:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:40.476 18:12:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:08:40.476 18:12:33 -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:08:40.476 18:12:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:08:40.476 18:12:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:40.476 18:12:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:40.476 18:12:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:08:40.476 18:12:33 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:40.476 18:12:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:40.476 
18:12:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:40.476 18:12:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:40.476 18:12:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:08:40.476 18:12:33 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:08:40.476 18:12:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:40.476 18:12:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:40.476 18:12:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:40.476 18:12:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:08:40.476 18:12:33 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:08:40.476 18:12:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:40.476 18:12:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:40.476 18:12:33 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:40.476 18:12:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:40.476 18:12:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:40.476 18:12:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:40.476 18:12:33 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:40.476 18:12:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:40.476 No valid GPT data, bailing 00:08:40.476 18:12:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:40.476 18:12:33 -- scripts/common.sh@394 -- # pt= 00:08:40.476 18:12:33 -- scripts/common.sh@395 -- # return 1 00:08:40.476 18:12:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:40.476 1+0 records in 00:08:40.476 1+0 records out 00:08:40.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00581059 s, 180 MB/s 00:08:40.476 18:12:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:40.476 18:12:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:40.476 18:12:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:08:40.476 18:12:33 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:08:40.476 18:12:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:08:40.476 No valid GPT data, bailing 00:08:40.476 18:12:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:08:40.734 18:12:33 -- scripts/common.sh@394 -- # pt= 00:08:40.734 18:12:33 -- scripts/common.sh@395 -- # return 1 00:08:40.734 18:12:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:08:40.734 1+0 records in 00:08:40.734 1+0 records out 00:08:40.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416325 s, 252 MB/s 00:08:40.734 18:12:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:40.734 18:12:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:40.734 18:12:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:08:40.734 18:12:33 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:08:40.734 18:12:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:08:40.734 No valid GPT data, bailing 00:08:40.735 18:12:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:08:40.735 18:12:33 -- scripts/common.sh@394 -- # pt= 00:08:40.735 18:12:33 -- scripts/common.sh@395 -- # return 1 00:08:40.735 18:12:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:08:40.735 1+0 
records in 00:08:40.735 1+0 records out 00:08:40.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00617722 s, 170 MB/s 00:08:40.735 18:12:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:40.735 18:12:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:40.735 18:12:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:40.735 18:12:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:40.735 18:12:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:40.735 No valid GPT data, bailing 00:08:40.735 18:12:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:40.735 18:12:33 -- scripts/common.sh@394 -- # pt= 00:08:40.735 18:12:33 -- scripts/common.sh@395 -- # return 1 00:08:40.735 18:12:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:40.735 1+0 records in 00:08:40.735 1+0 records out 00:08:40.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00676119 s, 155 MB/s 00:08:40.735 18:12:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:40.735 18:12:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:40.735 18:12:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:08:40.735 18:12:33 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:08:40.735 18:12:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:08:40.735 No valid GPT data, bailing 00:08:40.735 18:12:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:08:40.735 18:12:34 -- scripts/common.sh@394 -- # pt= 00:08:40.735 18:12:34 -- scripts/common.sh@395 -- # return 1 00:08:40.735 18:12:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:08:41.009 1+0 records in 00:08:41.009 1+0 records out 00:08:41.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165842 s, 63.2 MB/s 00:08:41.009 18:12:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:41.009 18:12:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:41.009 18:12:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:08:41.009 18:12:34 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:08:41.009 18:12:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:08:41.009 No valid GPT data, bailing 00:08:41.009 18:12:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:08:41.009 18:12:34 -- scripts/common.sh@394 -- # pt= 00:08:41.009 18:12:34 -- scripts/common.sh@395 -- # return 1 00:08:41.009 18:12:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:08:41.009 1+0 records in 00:08:41.009 1+0 records out 00:08:41.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00343198 s, 306 MB/s 00:08:41.009 18:12:34 -- spdk/autotest.sh@105 -- # sync 00:08:41.009 18:12:34 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:41.009 18:12:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:41.009 18:12:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:44.299 18:12:37 -- spdk/autotest.sh@111 -- # uname -s 00:08:44.299 18:12:37 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:44.299 18:12:37 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:44.299 18:12:37 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:44.572 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:44.841 
Hugepages
00:08:44.841 node hugesize free / total
00:08:45.119 node0 1048576kB 0 / 0
00:08:45.119 node0 2048kB 0 / 0
00:08:45.119
00:08:45.119 Type BDF Vendor Device NUMA Driver Device Block devices
00:08:45.119 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:08:45.119 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:08:45.392 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:08:45.392 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:08:45.392 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:08:45.392 18:12:38 -- spdk/autotest.sh@117 -- # uname -s
00:08:45.392 18:12:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:08:45.392 18:12:38 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:08:45.392 18:12:38 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:08:45.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:46.913 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:08:46.913 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:08:46.913 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:08:46.913 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:08:46.913 18:12:40 -- common/autotest_common.sh@1517 -- # sleep 1
00:08:47.875 18:12:41 -- common/autotest_common.sh@1518 -- # bdfs=()
00:08:47.875 18:12:41 -- common/autotest_common.sh@1518 -- # local bdfs
00:08:47.875 18:12:41 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:08:47.875 18:12:41 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:08:47.875 18:12:41 -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:47.875 18:12:41 -- common/autotest_common.sh@1498 -- # local bdfs
00:08:47.875 18:12:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:47.875 18:12:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:47.875 18:12:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:48.155 18:12:41 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:08:48.155 18:12:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:08:48.155 18:12:41 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:08:48.412 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:48.671 Waiting for block devices as requested
00:08:48.671 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:08:48.671 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:08:48.928 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:08:48.928 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:08:54.233 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:08:54.233 18:12:47 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:08:54.233 18:12:47 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:08:54.233 18:12:47 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:08:54.233 18:12:47 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:08:54.233 18:12:47 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:08:54.233 18:12:47 -- common/autotest_common.sh@1488 -- #
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:54.233 18:12:47 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:08:54.233 18:12:47 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:08:54.233 18:12:47 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:54.233 18:12:47 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:54.233 18:12:47 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:54.233 18:12:47 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1543 -- # continue 00:08:54.233 18:12:47 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:54.233 18:12:47 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:54.233 18:12:47 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:54.233 18:12:47 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:08:54.233 18:12:47 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:54.233 18:12:47 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:54.233 18:12:47 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:54.233 18:12:47 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:54.233 18:12:47 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:54.233 18:12:47 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:54.233 18:12:47 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:54.233 18:12:47 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1543 -- # continue 00:08:54.233 18:12:47 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:54.233 18:12:47 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:08:54.233 18:12:47 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
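(Annotation: the readlink/grep pairs around this point are get_nvme_ctrlr_from_bdf mapping a PCI address to its controller node: each /sys/class/nvme/nvmeX symlink is resolved and the one whose target contains the BDF is kept. Condensed into a stand-alone loop, with the BDF hard-coded for illustration:)

bdf=0000:00:12.0
for link in /sys/class/nvme/nvme*; do
  target=$(readlink -f "$link")                           # e.g. /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2
  [[ $target == *"$bdf/nvme/"* ]] && basename "$target"   # prints nvme2 on this run
done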
00:08:54.233 18:12:47 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:54.233 18:12:47 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:08:54.233 18:12:47 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:08:54.233 18:12:47 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:08:54.233 18:12:47 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:08:54.233 18:12:47 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:54.233 18:12:47 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:54.233 18:12:47 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:54.233 18:12:47 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1543 -- # continue 00:08:54.233 18:12:47 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:54.233 18:12:47 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:08:54.233 18:12:47 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:08:54.233 18:12:47 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:08:54.233 18:12:47 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:08:54.233 18:12:47 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:08:54.233 18:12:47 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:08:54.233 18:12:47 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:08:54.233 18:12:47 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:54.233 18:12:47 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:54.233 18:12:47 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:54.233 18:12:47 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:54.233 18:12:47 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:54.233 18:12:47 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
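(Annotation: the bdfs array driving the loop above was filled earlier by gen_nvme.sh piped through jq; run on its own, the enumeration is just the pipeline below, which on this host prints the four controller addresses:)

/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
# -> 0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0 (one per line)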
00:08:54.233 18:12:47 -- common/autotest_common.sh@1543 -- # continue 00:08:54.233 18:12:47 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:54.233 18:12:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.233 18:12:47 -- common/autotest_common.sh@10 -- # set +x 00:08:54.233 18:12:47 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:54.233 18:12:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.233 18:12:47 -- common/autotest_common.sh@10 -- # set +x 00:08:54.233 18:12:47 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:54.796 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:55.727 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.727 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.727 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.727 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.727 18:12:48 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:55.727 18:12:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:55.727 18:12:48 -- common/autotest_common.sh@10 -- # set +x 00:08:55.727 18:12:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:55.727 18:12:48 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:55.727 18:12:48 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:55.727 18:12:48 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:55.727 18:12:48 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:55.727 18:12:48 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:55.727 18:12:48 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:55.727 18:12:48 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:55.727 18:12:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:55.727 18:12:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:55.727 18:12:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:55.727 18:12:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:55.727 18:12:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:55.727 18:12:49 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:55.727 18:12:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:55.727 18:12:49 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:55.727 18:12:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:55.727 18:12:49 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:55.727 18:12:49 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:55.727 18:12:49 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:55.727 18:12:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:55.727 18:12:49 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:55.727 18:12:49 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:55.727 18:12:49 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:55.727 18:12:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:08:55.727 18:12:49 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:55.727 18:12:49 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
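(Annotation: each controller pass above boils down to two nvme id-ctrl field reads: OACS, where bit 3 (0x8) signals namespace-management support and yields the oacs_ns_manage=8 seen in the trace, and UNVMCAP, where 0 means no unallocated capacity to revert. The same checks as a stand-alone snippet, device path illustrative:)

ctrl=/dev/nvme0
oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # e.g. ' 0x12a'
unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)  # e.g. ' 0'
if (( (oacs & 0x8) != 0 )) && (( unvmcap == 0 )); then
  echo "$ctrl: namespace management supported, nothing unallocated to revert"
fi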
00:08:55.727 18:12:49 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:55.727 18:12:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:08:55.727 18:12:49 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:55.727 18:12:49 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:55.727 18:12:49 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:55.727 18:12:49 -- common/autotest_common.sh@1572 -- # return 0 00:08:55.727 18:12:49 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:55.727 18:12:49 -- common/autotest_common.sh@1580 -- # return 0 00:08:55.727 18:12:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:55.727 18:12:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:55.727 18:12:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:55.727 18:12:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:55.727 18:12:49 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:55.727 18:12:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.727 18:12:49 -- common/autotest_common.sh@10 -- # set +x 00:08:55.727 18:12:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:55.727 18:12:49 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:55.727 18:12:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.727 18:12:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.727 18:12:49 -- common/autotest_common.sh@10 -- # set +x 00:08:55.984 ************************************ 00:08:55.984 START TEST env 00:08:55.984 ************************************ 00:08:55.984 18:12:49 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:55.984 * Looking for test storage... 00:08:55.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:55.984 18:12:49 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.984 18:12:49 env -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.984 18:12:49 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.984 18:12:49 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.984 18:12:49 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.984 18:12:49 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.984 18:12:49 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.984 18:12:49 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.984 18:12:49 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.984 18:12:49 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.984 18:12:49 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.984 18:12:49 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.984 18:12:49 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.984 18:12:49 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.984 18:12:49 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.984 18:12:49 env -- scripts/common.sh@344 -- # case "$op" in 00:08:55.984 18:12:49 env -- scripts/common.sh@345 -- # : 1 00:08:55.984 18:12:49 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.984 18:12:49 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.984 18:12:49 env -- scripts/common.sh@365 -- # decimal 1 00:08:55.984 18:12:49 env -- scripts/common.sh@353 -- # local d=1 00:08:55.984 18:12:49 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.984 18:12:49 env -- scripts/common.sh@355 -- # echo 1 00:08:55.984 18:12:49 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.984 18:12:49 env -- scripts/common.sh@366 -- # decimal 2 00:08:55.984 18:12:49 env -- scripts/common.sh@353 -- # local d=2 00:08:55.984 18:12:49 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.984 18:12:49 env -- scripts/common.sh@355 -- # echo 2 00:08:55.984 18:12:49 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.984 18:12:49 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.984 18:12:49 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.984 18:12:49 env -- scripts/common.sh@368 -- # return 0 00:08:55.984 18:12:49 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.984 18:12:49 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.984 --rc genhtml_branch_coverage=1 00:08:55.984 --rc genhtml_function_coverage=1 00:08:55.984 --rc genhtml_legend=1 00:08:55.984 --rc geninfo_all_blocks=1 00:08:55.984 --rc geninfo_unexecuted_blocks=1 00:08:55.984 00:08:55.984 ' 00:08:55.984 18:12:49 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.984 --rc genhtml_branch_coverage=1 00:08:55.984 --rc genhtml_function_coverage=1 00:08:55.984 --rc genhtml_legend=1 00:08:55.984 --rc geninfo_all_blocks=1 00:08:55.984 --rc geninfo_unexecuted_blocks=1 00:08:55.984 00:08:55.984 ' 00:08:55.984 18:12:49 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.984 --rc genhtml_branch_coverage=1 00:08:55.984 --rc genhtml_function_coverage=1 00:08:55.984 --rc genhtml_legend=1 00:08:55.984 --rc geninfo_all_blocks=1 00:08:55.984 --rc geninfo_unexecuted_blocks=1 00:08:55.984 00:08:55.984 ' 00:08:55.984 18:12:49 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.984 --rc genhtml_branch_coverage=1 00:08:55.984 --rc genhtml_function_coverage=1 00:08:55.984 --rc genhtml_legend=1 00:08:55.984 --rc geninfo_all_blocks=1 00:08:55.984 --rc geninfo_unexecuted_blocks=1 00:08:55.984 00:08:55.984 ' 00:08:55.984 18:12:49 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:55.984 18:12:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.984 18:12:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.984 18:12:49 env -- common/autotest_common.sh@10 -- # set +x 00:08:55.984 ************************************ 00:08:55.984 START TEST env_memory 00:08:55.984 ************************************ 00:08:55.984 18:12:49 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:55.984 00:08:55.984 00:08:55.984 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.984 http://cunit.sourceforge.net/ 00:08:55.984 00:08:55.984 00:08:55.984 Suite: memory 00:08:56.252 Test: alloc and free memory map ...[2024-11-26 18:12:49.345416] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:56.252 passed 00:08:56.252 Test: mem map translation ...[2024-11-26 18:12:49.396833] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:56.252 [2024-11-26 18:12:49.396890] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:56.252 [2024-11-26 18:12:49.396992] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:56.252 [2024-11-26 18:12:49.397016] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:56.252 passed 00:08:56.252 Test: mem map registration ...[2024-11-26 18:12:49.483270] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:56.252 [2024-11-26 18:12:49.483381] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:56.252 passed 00:08:56.510 Test: mem map adjacent registrations ...passed 00:08:56.510 00:08:56.510 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.510 suites 1 1 n/a 0 0 00:08:56.510 tests 4 4 4 0 0 00:08:56.510 asserts 152 152 152 0 n/a 00:08:56.510 00:08:56.510 Elapsed time = 0.290 seconds 00:08:56.510 00:08:56.510 real 0m0.323s 00:08:56.510 user 0m0.294s 00:08:56.510 sys 0m0.022s 00:08:56.510 18:12:49 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.510 18:12:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:56.510 ************************************ 00:08:56.510 END TEST env_memory 00:08:56.510 ************************************ 00:08:56.510 18:12:49 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:56.510 18:12:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.510 18:12:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.510 18:12:49 env -- common/autotest_common.sh@10 -- # set +x 00:08:56.510 ************************************ 00:08:56.510 START TEST env_vtophys 00:08:56.510 ************************************ 00:08:56.510 18:12:49 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:56.510 EAL: lib.eal log level changed from notice to debug 00:08:56.510 EAL: Detected lcore 0 as core 0 on socket 0 00:08:56.510 EAL: Detected lcore 1 as core 0 on socket 0 00:08:56.510 EAL: Detected lcore 2 as core 0 on socket 0 00:08:56.510 EAL: Detected lcore 3 as core 0 on socket 0 00:08:56.510 EAL: Detected lcore 4 as core 0 on socket 0 00:08:56.510 EAL: Detected lcore 5 as core 0 on socket 0 00:08:56.510 EAL: Detected lcore 6 as core 0 on socket 0 00:08:56.510 EAL: Detected lcore 7 as core 0 on socket 0 00:08:56.510 EAL: Detected lcore 8 as core 0 on socket 0 00:08:56.510 EAL: Detected lcore 9 as core 0 on socket 0 00:08:56.510 EAL: Maximum logical cores by configuration: 128 00:08:56.510 EAL: Detected CPU lcores: 10 00:08:56.510 EAL: Detected NUMA nodes: 1 00:08:56.510 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:56.510 EAL: Detected shared linkage of DPDK 00:08:56.510 EAL: No 
shared files mode enabled, IPC will be disabled 00:08:56.510 EAL: Selected IOVA mode 'PA' 00:08:56.510 EAL: Probing VFIO support... 00:08:56.510 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:56.510 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:56.510 EAL: Ask a virtual area of 0x2e000 bytes 00:08:56.510 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:56.510 EAL: Setting up physically contiguous memory... 00:08:56.510 EAL: Setting maximum number of open files to 524288 00:08:56.510 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:56.510 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:56.510 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.510 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:56.510 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:56.510 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.510 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:56.510 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:56.510 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.510 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:56.510 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:56.510 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.510 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:56.510 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:56.510 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.510 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:56.510 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:56.510 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.510 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:56.510 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:56.510 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.510 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:56.510 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:56.510 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.510 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:56.510 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:56.510 EAL: Hugepages will be freed exactly as allocated. 00:08:56.511 EAL: No shared files mode enabled, IPC is disabled 00:08:56.511 EAL: No shared files mode enabled, IPC is disabled 00:08:56.771 EAL: TSC frequency is ~2290000 KHz 00:08:56.771 EAL: Main lcore 0 is ready (tid=7f4038908a40;cpuset=[0]) 00:08:56.771 EAL: Trying to obtain current memory policy. 00:08:56.771 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.771 EAL: Restoring previous memory policy: 0 00:08:56.771 EAL: request: mp_malloc_sync 00:08:56.771 EAL: No shared files mode enabled, IPC is disabled 00:08:56.771 EAL: Heap on socket 0 was expanded by 2MB 00:08:56.771 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:56.771 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:56.771 EAL: Mem event callback 'spdk:(nil)' registered 00:08:56.771 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:08:56.771 00:08:56.771 00:08:56.771 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.771 http://cunit.sourceforge.net/ 00:08:56.771 00:08:56.771 00:08:56.771 Suite: components_suite 00:08:57.028 Test: vtophys_malloc_test ...passed 00:08:57.028 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:57.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.028 EAL: Restoring previous memory policy: 4 00:08:57.028 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.028 EAL: request: mp_malloc_sync 00:08:57.028 EAL: No shared files mode enabled, IPC is disabled 00:08:57.028 EAL: Heap on socket 0 was expanded by 4MB 00:08:57.028 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.028 EAL: request: mp_malloc_sync 00:08:57.028 EAL: No shared files mode enabled, IPC is disabled 00:08:57.028 EAL: Heap on socket 0 was shrunk by 4MB 00:08:57.028 EAL: Trying to obtain current memory policy. 00:08:57.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.028 EAL: Restoring previous memory policy: 4 00:08:57.028 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.028 EAL: request: mp_malloc_sync 00:08:57.028 EAL: No shared files mode enabled, IPC is disabled 00:08:57.028 EAL: Heap on socket 0 was expanded by 6MB 00:08:57.028 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.028 EAL: request: mp_malloc_sync 00:08:57.028 EAL: No shared files mode enabled, IPC is disabled 00:08:57.028 EAL: Heap on socket 0 was shrunk by 6MB 00:08:57.028 EAL: Trying to obtain current memory policy. 00:08:57.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.028 EAL: Restoring previous memory policy: 4 00:08:57.028 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.028 EAL: request: mp_malloc_sync 00:08:57.028 EAL: No shared files mode enabled, IPC is disabled 00:08:57.028 EAL: Heap on socket 0 was expanded by 10MB 00:08:57.028 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.028 EAL: request: mp_malloc_sync 00:08:57.028 EAL: No shared files mode enabled, IPC is disabled 00:08:57.028 EAL: Heap on socket 0 was shrunk by 10MB 00:08:57.028 EAL: Trying to obtain current memory policy. 00:08:57.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.028 EAL: Restoring previous memory policy: 4 00:08:57.028 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.028 EAL: request: mp_malloc_sync 00:08:57.028 EAL: No shared files mode enabled, IPC is disabled 00:08:57.028 EAL: Heap on socket 0 was expanded by 18MB 00:08:57.286 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.286 EAL: request: mp_malloc_sync 00:08:57.286 EAL: No shared files mode enabled, IPC is disabled 00:08:57.286 EAL: Heap on socket 0 was shrunk by 18MB 00:08:57.286 EAL: Trying to obtain current memory policy. 00:08:57.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.286 EAL: Restoring previous memory policy: 4 00:08:57.286 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.286 EAL: request: mp_malloc_sync 00:08:57.286 EAL: No shared files mode enabled, IPC is disabled 00:08:57.286 EAL: Heap on socket 0 was expanded by 34MB 00:08:57.286 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.286 EAL: request: mp_malloc_sync 00:08:57.286 EAL: No shared files mode enabled, IPC is disabled 00:08:57.286 EAL: Heap on socket 0 was shrunk by 34MB 00:08:57.286 EAL: Trying to obtain current memory policy. 
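
The alternating "Heap on socket 0 was expanded/shrunk by N MB" records above come from the vtophys malloc test allocating and freeing DPDK-backed buffers at growing sizes; each step that touches new hugepages fires the registered 'spdk:' mem event callback. A minimal sketch of one such step, assuming the public spdk_dma_*/spdk_vtophys API from spdk/env.h (size and alignment values here are illustrative, not the test's actual ones):

    #include <errno.h>
    #include "spdk/env.h"

    /* Allocate a DMA-safe buffer, resolve its physical address, free it.
     * spdk_vtophys() returning SPDK_VTOPHYS_ERROR would mean the mem event
     * callback never registered the newly expanded region. */
    static int vtophys_step(uint64_t size)
    {
        void *buf = spdk_dma_zmalloc(size, 0x200000 /* 2 MB align */, NULL);
        if (buf == NULL) {
            return -ENOMEM;
        }
        uint64_t len = size;
        uint64_t paddr = spdk_vtophys(buf, &len);
        spdk_dma_free(buf);
        return paddr == SPDK_VTOPHYS_ERROR ? -EFAULT : 0;
    }
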
00:08:57.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.286 EAL: Restoring previous memory policy: 4 00:08:57.286 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.286 EAL: request: mp_malloc_sync 00:08:57.286 EAL: No shared files mode enabled, IPC is disabled 00:08:57.286 EAL: Heap on socket 0 was expanded by 66MB 00:08:57.543 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.543 EAL: request: mp_malloc_sync 00:08:57.543 EAL: No shared files mode enabled, IPC is disabled 00:08:57.543 EAL: Heap on socket 0 was shrunk by 66MB 00:08:57.543 EAL: Trying to obtain current memory policy. 00:08:57.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.801 EAL: Restoring previous memory policy: 4 00:08:57.801 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.801 EAL: request: mp_malloc_sync 00:08:57.801 EAL: No shared files mode enabled, IPC is disabled 00:08:57.801 EAL: Heap on socket 0 was expanded by 130MB 00:08:58.059 EAL: Calling mem event callback 'spdk:(nil)' 00:08:58.059 EAL: request: mp_malloc_sync 00:08:58.059 EAL: No shared files mode enabled, IPC is disabled 00:08:58.059 EAL: Heap on socket 0 was shrunk by 130MB 00:08:58.316 EAL: Trying to obtain current memory policy. 00:08:58.316 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:58.316 EAL: Restoring previous memory policy: 4 00:08:58.316 EAL: Calling mem event callback 'spdk:(nil)' 00:08:58.316 EAL: request: mp_malloc_sync 00:08:58.316 EAL: No shared files mode enabled, IPC is disabled 00:08:58.316 EAL: Heap on socket 0 was expanded by 258MB 00:08:58.879 EAL: Calling mem event callback 'spdk:(nil)' 00:08:58.879 EAL: request: mp_malloc_sync 00:08:58.879 EAL: No shared files mode enabled, IPC is disabled 00:08:58.879 EAL: Heap on socket 0 was shrunk by 258MB 00:08:59.443 EAL: Trying to obtain current memory policy. 00:08:59.443 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:59.443 EAL: Restoring previous memory policy: 4 00:08:59.443 EAL: Calling mem event callback 'spdk:(nil)' 00:08:59.443 EAL: request: mp_malloc_sync 00:08:59.443 EAL: No shared files mode enabled, IPC is disabled 00:08:59.443 EAL: Heap on socket 0 was expanded by 514MB 00:09:00.817 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.817 EAL: request: mp_malloc_sync 00:09:00.817 EAL: No shared files mode enabled, IPC is disabled 00:09:00.817 EAL: Heap on socket 0 was shrunk by 514MB 00:09:01.754 EAL: Trying to obtain current memory policy. 
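
The "Calling mem event callback 'spdk:(nil)'" lines bracketing each expansion are the hook that keeps SPDK mem maps coherent with the DPDK heap; the env_memory errors earlier in this run ("invalid spdk_mem_map_set_translation parameters", "Initial mem_map notify failed") poke the same machinery on purpose. A hedged sketch of how a consumer would plug into it, assuming spdk_mem_map_alloc() and struct spdk_mem_map_ops from spdk/env.h (the default translation of 0 is illustrative):

    #include "spdk/env.h"

    /* Invoked for every region at map creation and again on each later
     * spdk_mem_register()/heap event; a real callback would add or drop
     * translations for vaddr..vaddr+size here. Returning non-zero from
     * the initial pass is what produces "Initial mem_map notify failed". */
    static int
    my_notify(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action,
              void *vaddr, size_t size)
    {
        return 0;
    }

    static const struct spdk_mem_map_ops my_ops = {
        .notify_cb = my_notify,
        .are_contiguous = NULL,
    };

    static struct spdk_mem_map *g_map;

    static int my_module_init(void)
    {
        g_map = spdk_mem_map_alloc(0, &my_ops, NULL);
        return g_map != NULL ? 0 : -1;
    }
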
00:09:01.754 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:01.754 EAL: Restoring previous memory policy: 4 00:09:01.754 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.754 EAL: request: mp_malloc_sync 00:09:01.754 EAL: No shared files mode enabled, IPC is disabled 00:09:01.754 EAL: Heap on socket 0 was expanded by 1026MB 00:09:04.285 EAL: Calling mem event callback 'spdk:(nil)' 00:09:04.285 EAL: request: mp_malloc_sync 00:09:04.285 EAL: No shared files mode enabled, IPC is disabled 00:09:04.285 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:06.184 passed 00:09:06.184 00:09:06.184 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.184 suites 1 1 n/a 0 0 00:09:06.184 tests 2 2 2 0 0 00:09:06.184 asserts 5747 5747 5747 0 n/a 00:09:06.184 00:09:06.184 Elapsed time = 9.428 seconds 00:09:06.184 EAL: Calling mem event callback 'spdk:(nil)' 00:09:06.184 EAL: request: mp_malloc_sync 00:09:06.184 EAL: No shared files mode enabled, IPC is disabled 00:09:06.184 EAL: Heap on socket 0 was shrunk by 2MB 00:09:06.184 EAL: No shared files mode enabled, IPC is disabled 00:09:06.184 EAL: No shared files mode enabled, IPC is disabled 00:09:06.184 EAL: No shared files mode enabled, IPC is disabled 00:09:06.184 00:09:06.184 real 0m9.761s 00:09:06.184 user 0m8.745s 00:09:06.184 sys 0m0.854s 00:09:06.184 18:12:59 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.184 18:12:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:06.184 ************************************ 00:09:06.184 END TEST env_vtophys 00:09:06.184 ************************************ 00:09:06.184 18:12:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:06.184 18:12:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.184 18:12:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.184 18:12:59 env -- common/autotest_common.sh@10 -- # set +x 00:09:06.184 ************************************ 00:09:06.184 START TEST env_pci 00:09:06.184 ************************************ 00:09:06.184 18:12:59 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:06.442 00:09:06.442 00:09:06.442 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.442 http://cunit.sourceforge.net/ 00:09:06.442 00:09:06.442 00:09:06.442 Suite: pci 00:09:06.442 Test: pci_hook ...[2024-11-26 18:12:59.525108] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57932 has claimed it 00:09:06.442 passed 00:09:06.442 00:09:06.442 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.442 suites 1 1 n/a 0 0 00:09:06.442 tests 1 1 1 0 0 00:09:06.442 asserts 25 25 25 0 n/a 00:09:06.442 00:09:06.442 Elapsed time = 0.013 secondsEAL: Cannot find device (10000:00:01.0) 00:09:06.442 EAL: Failed to attach device on primary process 00:09:06.442 00:09:06.442 00:09:06.442 real 0m0.099s 00:09:06.442 user 0m0.038s 00:09:06.442 sys 0m0.059s 00:09:06.442 18:12:59 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.442 18:12:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:06.442 ************************************ 00:09:06.442 END TEST env_pci 00:09:06.442 ************************************ 00:09:06.442 18:12:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:06.442 18:12:59 env -- env/env.sh@15 -- # uname 00:09:06.442 18:12:59 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:06.442 18:12:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:06.442 18:12:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:06.442 18:12:59 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:06.442 18:12:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.442 18:12:59 env -- common/autotest_common.sh@10 -- # set +x 00:09:06.442 ************************************ 00:09:06.442 START TEST env_dpdk_post_init 00:09:06.442 ************************************ 00:09:06.442 18:12:59 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:06.442 EAL: Detected CPU lcores: 10 00:09:06.442 EAL: Detected NUMA nodes: 1 00:09:06.442 EAL: Detected shared linkage of DPDK 00:09:06.442 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:06.442 EAL: Selected IOVA mode 'PA' 00:09:06.701 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:06.701 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:06.701 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:06.701 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:09:06.701 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:09:06.701 Starting DPDK initialization... 00:09:06.701 Starting SPDK post initialization... 00:09:06.701 SPDK NVMe probe 00:09:06.701 Attaching to 0000:00:10.0 00:09:06.701 Attaching to 0000:00:11.0 00:09:06.701 Attaching to 0000:00:12.0 00:09:06.701 Attaching to 0000:00:13.0 00:09:06.701 Attached to 0000:00:10.0 00:09:06.701 Attached to 0000:00:11.0 00:09:06.701 Attached to 0000:00:13.0 00:09:06.701 Attached to 0000:00:12.0 00:09:06.701 Cleaning up... 
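
The post-init run above was launched with "-c 0x1 --base-virtaddr=0x200000000000", and the probe output reflects exactly those flags: a single lcore and the 0x2000xxxxxxxx virtual areas reserved earlier. A minimal sketch of the same bring-up through the C API, assuming struct spdk_env_opts as declared in spdk/env.h (the opts_size assignment is only needed on SPDK versions that require it before init):

    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        opts.opts_size = sizeof(opts);
        spdk_env_opts_init(&opts);
        opts.name = "env_dpdk_post_init";
        opts.core_mask = "0x1";
        opts.base_virtaddr = 0x200000000000ULL;

        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* PCI enumeration and NVMe attach happen here in the real test. */
        spdk_env_fini();
        return 0;
    }
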
00:09:06.701 00:09:06.701 real 0m0.325s 00:09:06.701 user 0m0.122s 00:09:06.701 sys 0m0.105s 00:09:06.701 18:12:59 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.701 18:12:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:06.701 ************************************ 00:09:06.701 END TEST env_dpdk_post_init 00:09:06.701 ************************************ 00:09:06.701 18:13:00 env -- env/env.sh@26 -- # uname 00:09:06.701 18:13:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:06.701 18:13:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:06.701 18:13:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.701 18:13:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.701 18:13:00 env -- common/autotest_common.sh@10 -- # set +x 00:09:06.960 ************************************ 00:09:06.960 START TEST env_mem_callbacks 00:09:06.960 ************************************ 00:09:06.960 18:13:00 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:06.960 EAL: Detected CPU lcores: 10 00:09:06.960 EAL: Detected NUMA nodes: 1 00:09:06.960 EAL: Detected shared linkage of DPDK 00:09:06.960 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:06.960 EAL: Selected IOVA mode 'PA' 00:09:06.960 00:09:06.960 00:09:06.960 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.960 http://cunit.sourceforge.net/ 00:09:06.960 00:09:06.960 00:09:06.960 Suite: memory 00:09:06.960 Test: test ... 00:09:06.960 register 0x200000200000 2097152 00:09:06.960 malloc 3145728 00:09:06.960 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:06.960 register 0x200000400000 4194304 00:09:06.960 buf 0x2000004fffc0 len 3145728 PASSED 00:09:06.960 malloc 64 00:09:06.960 buf 0x2000004ffec0 len 64 PASSED 00:09:06.960 malloc 4194304 00:09:06.960 register 0x200000800000 6291456 00:09:06.960 buf 0x2000009fffc0 len 4194304 PASSED 00:09:06.960 free 0x2000004fffc0 3145728 00:09:06.960 free 0x2000004ffec0 64 00:09:06.960 unregister 0x200000400000 4194304 PASSED 00:09:06.960 free 0x2000009fffc0 4194304 00:09:06.960 unregister 0x200000800000 6291456 PASSED 00:09:06.960 malloc 8388608 00:09:06.960 register 0x200000400000 10485760 00:09:06.960 buf 0x2000005fffc0 len 8388608 PASSED 00:09:06.960 free 0x2000005fffc0 8388608 00:09:06.960 unregister 0x200000400000 10485760 PASSED 00:09:07.218 passed 00:09:07.218 00:09:07.218 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.218 suites 1 1 n/a 0 0 00:09:07.218 tests 1 1 1 0 0 00:09:07.218 asserts 15 15 15 0 n/a 00:09:07.218 00:09:07.218 Elapsed time = 0.098 seconds 00:09:07.218 00:09:07.218 real 0m0.294s 00:09:07.218 user 0m0.128s 00:09:07.218 sys 0m0.064s 00:09:07.218 18:13:00 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.218 18:13:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:07.218 ************************************ 00:09:07.218 END TEST env_mem_callbacks 00:09:07.218 ************************************ 00:09:07.218 00:09:07.218 real 0m11.322s 00:09:07.218 user 0m9.536s 00:09:07.218 sys 0m1.444s 00:09:07.218 18:13:00 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.218 18:13:00 env -- common/autotest_common.sh@10 -- # set +x 00:09:07.218 ************************************ 00:09:07.218 END TEST env 00:09:07.218 
************************************ 00:09:07.218 18:13:00 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:07.218 18:13:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.218 18:13:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.218 18:13:00 -- common/autotest_common.sh@10 -- # set +x 00:09:07.218 ************************************ 00:09:07.218 START TEST rpc 00:09:07.218 ************************************ 00:09:07.218 18:13:00 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:07.218 * Looking for test storage... 00:09:07.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:07.219 18:13:00 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:07.219 18:13:00 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:07.219 18:13:00 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:07.483 18:13:00 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:07.483 18:13:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.483 18:13:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.483 18:13:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.483 18:13:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.483 18:13:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.483 18:13:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.483 18:13:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.483 18:13:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.483 18:13:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.483 18:13:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.483 18:13:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.483 18:13:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:07.483 18:13:00 rpc -- scripts/common.sh@345 -- # : 1 00:09:07.483 18:13:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.483 18:13:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.483 18:13:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:07.483 18:13:00 rpc -- scripts/common.sh@353 -- # local d=1 00:09:07.483 18:13:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.483 18:13:00 rpc -- scripts/common.sh@355 -- # echo 1 00:09:07.483 18:13:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.483 18:13:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:07.483 18:13:00 rpc -- scripts/common.sh@353 -- # local d=2 00:09:07.483 18:13:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.483 18:13:00 rpc -- scripts/common.sh@355 -- # echo 2 00:09:07.483 18:13:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.483 18:13:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.483 18:13:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.483 18:13:00 rpc -- scripts/common.sh@368 -- # return 0 00:09:07.483 18:13:00 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.483 18:13:00 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:07.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.483 --rc genhtml_branch_coverage=1 00:09:07.483 --rc genhtml_function_coverage=1 00:09:07.483 --rc genhtml_legend=1 00:09:07.483 --rc geninfo_all_blocks=1 00:09:07.483 --rc geninfo_unexecuted_blocks=1 00:09:07.483 00:09:07.483 ' 00:09:07.483 18:13:00 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:07.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.483 --rc genhtml_branch_coverage=1 00:09:07.483 --rc genhtml_function_coverage=1 00:09:07.483 --rc genhtml_legend=1 00:09:07.483 --rc geninfo_all_blocks=1 00:09:07.483 --rc geninfo_unexecuted_blocks=1 00:09:07.483 00:09:07.483 ' 00:09:07.483 18:13:00 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:07.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.483 --rc genhtml_branch_coverage=1 00:09:07.483 --rc genhtml_function_coverage=1 00:09:07.483 --rc genhtml_legend=1 00:09:07.483 --rc geninfo_all_blocks=1 00:09:07.483 --rc geninfo_unexecuted_blocks=1 00:09:07.483 00:09:07.483 ' 00:09:07.483 18:13:00 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:07.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.483 --rc genhtml_branch_coverage=1 00:09:07.483 --rc genhtml_function_coverage=1 00:09:07.483 --rc genhtml_legend=1 00:09:07.483 --rc geninfo_all_blocks=1 00:09:07.483 --rc geninfo_unexecuted_blocks=1 00:09:07.483 00:09:07.483 ' 00:09:07.483 18:13:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58059 00:09:07.483 18:13:00 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:07.483 18:13:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:07.483 18:13:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58059 00:09:07.483 18:13:00 rpc -- common/autotest_common.sh@835 -- # '[' -z 58059 ']' 00:09:07.483 18:13:00 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.483 18:13:00 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.483 18:13:00 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:07.483 18:13:00 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.483 18:13:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.483 [2024-11-26 18:13:00.730559] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:09:07.483 [2024-11-26 18:13:00.730714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58059 ] 00:09:07.741 [2024-11-26 18:13:00.894013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.741 [2024-11-26 18:13:01.029585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:07.741 [2024-11-26 18:13:01.029649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58059' to capture a snapshot of events at runtime. 00:09:07.741 [2024-11-26 18:13:01.029662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.741 [2024-11-26 18:13:01.029674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.741 [2024-11-26 18:13:01.029684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58059 for offline analysis/debug. 00:09:07.741 [2024-11-26 18:13:01.031158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.118 18:13:02 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.118 18:13:02 rpc -- common/autotest_common.sh@868 -- # return 0 00:09:09.118 18:13:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:09.118 18:13:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:09.118 18:13:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:09.118 18:13:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:09.118 18:13:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.118 18:13:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.118 18:13:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.118 ************************************ 00:09:09.118 START TEST rpc_integrity 00:09:09.118 ************************************ 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:09.118 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.118 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:09.118 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:09.118 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:09.118 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.118 18:13:02 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.118 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:09.118 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.118 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:09.118 { 00:09:09.118 "name": "Malloc0", 00:09:09.118 "aliases": [ 00:09:09.118 "86ca2ff2-1b8f-41ba-bf59-9b4798bd8b0b" 00:09:09.118 ], 00:09:09.118 "product_name": "Malloc disk", 00:09:09.118 "block_size": 512, 00:09:09.118 "num_blocks": 16384, 00:09:09.118 "uuid": "86ca2ff2-1b8f-41ba-bf59-9b4798bd8b0b", 00:09:09.118 "assigned_rate_limits": { 00:09:09.118 "rw_ios_per_sec": 0, 00:09:09.118 "rw_mbytes_per_sec": 0, 00:09:09.118 "r_mbytes_per_sec": 0, 00:09:09.118 "w_mbytes_per_sec": 0 00:09:09.118 }, 00:09:09.118 "claimed": false, 00:09:09.118 "zoned": false, 00:09:09.118 "supported_io_types": { 00:09:09.118 "read": true, 00:09:09.118 "write": true, 00:09:09.118 "unmap": true, 00:09:09.118 "flush": true, 00:09:09.118 "reset": true, 00:09:09.118 "nvme_admin": false, 00:09:09.118 "nvme_io": false, 00:09:09.118 "nvme_io_md": false, 00:09:09.118 "write_zeroes": true, 00:09:09.118 "zcopy": true, 00:09:09.118 "get_zone_info": false, 00:09:09.118 "zone_management": false, 00:09:09.118 "zone_append": false, 00:09:09.118 "compare": false, 00:09:09.118 "compare_and_write": false, 00:09:09.118 "abort": true, 00:09:09.118 "seek_hole": false, 00:09:09.118 "seek_data": false, 00:09:09.118 "copy": true, 00:09:09.118 "nvme_iov_md": false 00:09:09.118 }, 00:09:09.118 "memory_domains": [ 00:09:09.118 { 00:09:09.118 "dma_device_id": "system", 00:09:09.118 "dma_device_type": 1 00:09:09.118 }, 00:09:09.118 { 00:09:09.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.118 "dma_device_type": 2 00:09:09.118 } 00:09:09.118 ], 00:09:09.118 "driver_specific": {} 00:09:09.118 } 00:09:09.118 ]' 00:09:09.118 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:09.118 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:09.118 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.118 [2024-11-26 18:13:02.214944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:09.118 [2024-11-26 18:13:02.215041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.118 [2024-11-26 18:13:02.215074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:09.118 [2024-11-26 18:13:02.215089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.118 [2024-11-26 18:13:02.217843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.118 [2024-11-26 18:13:02.217912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:09.118 Passthru0 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.118 
18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.118 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.118 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:09.118 { 00:09:09.118 "name": "Malloc0", 00:09:09.118 "aliases": [ 00:09:09.118 "86ca2ff2-1b8f-41ba-bf59-9b4798bd8b0b" 00:09:09.119 ], 00:09:09.119 "product_name": "Malloc disk", 00:09:09.119 "block_size": 512, 00:09:09.119 "num_blocks": 16384, 00:09:09.119 "uuid": "86ca2ff2-1b8f-41ba-bf59-9b4798bd8b0b", 00:09:09.119 "assigned_rate_limits": { 00:09:09.119 "rw_ios_per_sec": 0, 00:09:09.119 "rw_mbytes_per_sec": 0, 00:09:09.119 "r_mbytes_per_sec": 0, 00:09:09.119 "w_mbytes_per_sec": 0 00:09:09.119 }, 00:09:09.119 "claimed": true, 00:09:09.119 "claim_type": "exclusive_write", 00:09:09.119 "zoned": false, 00:09:09.119 "supported_io_types": { 00:09:09.119 "read": true, 00:09:09.119 "write": true, 00:09:09.119 "unmap": true, 00:09:09.119 "flush": true, 00:09:09.119 "reset": true, 00:09:09.119 "nvme_admin": false, 00:09:09.119 "nvme_io": false, 00:09:09.119 "nvme_io_md": false, 00:09:09.119 "write_zeroes": true, 00:09:09.119 "zcopy": true, 00:09:09.119 "get_zone_info": false, 00:09:09.119 "zone_management": false, 00:09:09.119 "zone_append": false, 00:09:09.119 "compare": false, 00:09:09.119 "compare_and_write": false, 00:09:09.119 "abort": true, 00:09:09.119 "seek_hole": false, 00:09:09.119 "seek_data": false, 00:09:09.119 "copy": true, 00:09:09.119 "nvme_iov_md": false 00:09:09.119 }, 00:09:09.119 "memory_domains": [ 00:09:09.119 { 00:09:09.119 "dma_device_id": "system", 00:09:09.119 "dma_device_type": 1 00:09:09.119 }, 00:09:09.119 { 00:09:09.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.119 "dma_device_type": 2 00:09:09.119 } 00:09:09.119 ], 00:09:09.119 "driver_specific": {} 00:09:09.119 }, 00:09:09.119 { 00:09:09.119 "name": "Passthru0", 00:09:09.119 "aliases": [ 00:09:09.119 "caaa41f2-d9e3-5ad9-b134-305dae91b06b" 00:09:09.119 ], 00:09:09.119 "product_name": "passthru", 00:09:09.119 "block_size": 512, 00:09:09.119 "num_blocks": 16384, 00:09:09.119 "uuid": "caaa41f2-d9e3-5ad9-b134-305dae91b06b", 00:09:09.119 "assigned_rate_limits": { 00:09:09.119 "rw_ios_per_sec": 0, 00:09:09.119 "rw_mbytes_per_sec": 0, 00:09:09.119 "r_mbytes_per_sec": 0, 00:09:09.119 "w_mbytes_per_sec": 0 00:09:09.119 }, 00:09:09.119 "claimed": false, 00:09:09.119 "zoned": false, 00:09:09.119 "supported_io_types": { 00:09:09.119 "read": true, 00:09:09.119 "write": true, 00:09:09.119 "unmap": true, 00:09:09.119 "flush": true, 00:09:09.119 "reset": true, 00:09:09.119 "nvme_admin": false, 00:09:09.119 "nvme_io": false, 00:09:09.119 "nvme_io_md": false, 00:09:09.119 "write_zeroes": true, 00:09:09.119 "zcopy": true, 00:09:09.119 "get_zone_info": false, 00:09:09.119 "zone_management": false, 00:09:09.119 "zone_append": false, 00:09:09.119 "compare": false, 00:09:09.119 "compare_and_write": false, 00:09:09.119 "abort": true, 00:09:09.119 "seek_hole": false, 00:09:09.119 "seek_data": false, 00:09:09.119 "copy": true, 00:09:09.119 "nvme_iov_md": false 00:09:09.119 }, 00:09:09.119 "memory_domains": [ 00:09:09.119 { 00:09:09.119 "dma_device_id": "system", 00:09:09.119 "dma_device_type": 1 00:09:09.119 }, 00:09:09.119 { 00:09:09.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.119 "dma_device_type": 2 
00:09:09.119 } 00:09:09.119 ], 00:09:09.119 "driver_specific": { 00:09:09.119 "passthru": { 00:09:09.119 "name": "Passthru0", 00:09:09.119 "base_bdev_name": "Malloc0" 00:09:09.119 } 00:09:09.119 } 00:09:09.119 } 00:09:09.119 ]' 00:09:09.119 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:09.119 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:09.119 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:09.119 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.119 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.119 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.119 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:09.119 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.119 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.119 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.119 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:09.119 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.119 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.119 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.119 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:09.119 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:09.119 18:13:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:09.119 00:09:09.119 real 0m0.333s 00:09:09.119 user 0m0.183s 00:09:09.119 sys 0m0.035s 00:09:09.119 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.119 18:13:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.119 ************************************ 00:09:09.119 END TEST rpc_integrity 00:09:09.119 ************************************ 00:09:09.119 18:13:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:09.119 18:13:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.119 18:13:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.119 18:13:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.119 ************************************ 00:09:09.119 START TEST rpc_plugins 00:09:09.119 ************************************ 00:09:09.119 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:09.119 18:13:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:09.119 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.119 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:09.378 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.378 18:13:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:09.378 18:13:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:09.378 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.378 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:09.378 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.378 18:13:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:09.378 { 00:09:09.378 "name": "Malloc1", 00:09:09.378 "aliases": 
[ 00:09:09.378 "d3d3fa2f-0b7a-4a08-beb7-2222bf4f0399" 00:09:09.378 ], 00:09:09.378 "product_name": "Malloc disk", 00:09:09.378 "block_size": 4096, 00:09:09.378 "num_blocks": 256, 00:09:09.378 "uuid": "d3d3fa2f-0b7a-4a08-beb7-2222bf4f0399", 00:09:09.378 "assigned_rate_limits": { 00:09:09.378 "rw_ios_per_sec": 0, 00:09:09.378 "rw_mbytes_per_sec": 0, 00:09:09.378 "r_mbytes_per_sec": 0, 00:09:09.378 "w_mbytes_per_sec": 0 00:09:09.378 }, 00:09:09.378 "claimed": false, 00:09:09.378 "zoned": false, 00:09:09.378 "supported_io_types": { 00:09:09.378 "read": true, 00:09:09.378 "write": true, 00:09:09.378 "unmap": true, 00:09:09.378 "flush": true, 00:09:09.378 "reset": true, 00:09:09.378 "nvme_admin": false, 00:09:09.378 "nvme_io": false, 00:09:09.378 "nvme_io_md": false, 00:09:09.378 "write_zeroes": true, 00:09:09.378 "zcopy": true, 00:09:09.378 "get_zone_info": false, 00:09:09.378 "zone_management": false, 00:09:09.378 "zone_append": false, 00:09:09.378 "compare": false, 00:09:09.378 "compare_and_write": false, 00:09:09.378 "abort": true, 00:09:09.378 "seek_hole": false, 00:09:09.378 "seek_data": false, 00:09:09.378 "copy": true, 00:09:09.378 "nvme_iov_md": false 00:09:09.378 }, 00:09:09.378 "memory_domains": [ 00:09:09.378 { 00:09:09.378 "dma_device_id": "system", 00:09:09.378 "dma_device_type": 1 00:09:09.378 }, 00:09:09.378 { 00:09:09.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.378 "dma_device_type": 2 00:09:09.378 } 00:09:09.378 ], 00:09:09.378 "driver_specific": {} 00:09:09.378 } 00:09:09.378 ]' 00:09:09.378 18:13:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:09.378 18:13:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:09.378 18:13:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:09.378 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.378 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:09.378 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.378 18:13:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:09.378 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.378 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:09.378 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.378 18:13:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:09.378 18:13:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:09.378 18:13:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:09.378 00:09:09.378 real 0m0.185s 00:09:09.378 user 0m0.111s 00:09:09.378 sys 0m0.025s 00:09:09.378 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.378 18:13:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:09.378 ************************************ 00:09:09.378 END TEST rpc_plugins 00:09:09.378 ************************************ 00:09:09.378 18:13:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:09.378 18:13:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.378 18:13:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.378 18:13:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.378 ************************************ 00:09:09.378 START TEST rpc_trace_cmd_test 00:09:09.378 ************************************ 00:09:09.378 18:13:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:09:09.378 18:13:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:09.378 18:13:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:09.378 18:13:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.378 18:13:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.378 18:13:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.378 18:13:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:09.378 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58059", 00:09:09.378 "tpoint_group_mask": "0x8", 00:09:09.378 "iscsi_conn": { 00:09:09.379 "mask": "0x2", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "scsi": { 00:09:09.379 "mask": "0x4", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "bdev": { 00:09:09.379 "mask": "0x8", 00:09:09.379 "tpoint_mask": "0xffffffffffffffff" 00:09:09.379 }, 00:09:09.379 "nvmf_rdma": { 00:09:09.379 "mask": "0x10", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "nvmf_tcp": { 00:09:09.379 "mask": "0x20", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "ftl": { 00:09:09.379 "mask": "0x40", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "blobfs": { 00:09:09.379 "mask": "0x80", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "dsa": { 00:09:09.379 "mask": "0x200", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "thread": { 00:09:09.379 "mask": "0x400", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "nvme_pcie": { 00:09:09.379 "mask": "0x800", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "iaa": { 00:09:09.379 "mask": "0x1000", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "nvme_tcp": { 00:09:09.379 "mask": "0x2000", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "bdev_nvme": { 00:09:09.379 "mask": "0x4000", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "sock": { 00:09:09.379 "mask": "0x8000", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "blob": { 00:09:09.379 "mask": "0x10000", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "bdev_raid": { 00:09:09.379 "mask": "0x20000", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 }, 00:09:09.379 "scheduler": { 00:09:09.379 "mask": "0x40000", 00:09:09.379 "tpoint_mask": "0x0" 00:09:09.379 } 00:09:09.379 }' 00:09:09.379 18:13:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:09.638 18:13:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:09.638 18:13:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:09.638 18:13:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:09.638 18:13:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:09.638 18:13:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:09.638 18:13:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:09.638 18:13:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:09.638 18:13:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:09.638 18:13:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:09.638 00:09:09.638 real 0m0.235s 00:09:09.638 user 0m0.194s 00:09:09.638 sys 0m0.031s 00:09:09.638 18:13:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:09.638 18:13:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.638 ************************************ 00:09:09.638 END TEST rpc_trace_cmd_test 00:09:09.638 ************************************ 00:09:09.638 18:13:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:09.638 18:13:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:09.638 18:13:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:09.638 18:13:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.638 18:13:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.638 18:13:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.896 ************************************ 00:09:09.896 START TEST rpc_daemon_integrity 00:09:09.896 ************************************ 00:09:09.896 18:13:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:09.896 18:13:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:09.896 18:13:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.896 18:13:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.896 18:13:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.896 18:13:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:09.896 18:13:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:09.896 { 00:09:09.896 "name": "Malloc2", 00:09:09.896 "aliases": [ 00:09:09.896 "6856c288-572c-40b3-88cd-de856f71e561" 00:09:09.896 ], 00:09:09.896 "product_name": "Malloc disk", 00:09:09.896 "block_size": 512, 00:09:09.896 "num_blocks": 16384, 00:09:09.896 "uuid": "6856c288-572c-40b3-88cd-de856f71e561", 00:09:09.896 "assigned_rate_limits": { 00:09:09.896 "rw_ios_per_sec": 0, 00:09:09.896 "rw_mbytes_per_sec": 0, 00:09:09.896 "r_mbytes_per_sec": 0, 00:09:09.896 "w_mbytes_per_sec": 0 00:09:09.896 }, 00:09:09.896 "claimed": false, 00:09:09.896 "zoned": false, 00:09:09.896 "supported_io_types": { 00:09:09.896 "read": true, 00:09:09.896 "write": true, 00:09:09.896 "unmap": true, 00:09:09.896 "flush": true, 00:09:09.896 "reset": true, 00:09:09.896 "nvme_admin": false, 00:09:09.896 "nvme_io": false, 00:09:09.896 "nvme_io_md": false, 00:09:09.896 "write_zeroes": true, 00:09:09.896 "zcopy": true, 00:09:09.896 "get_zone_info": false, 00:09:09.896 "zone_management": false, 00:09:09.896 "zone_append": false, 00:09:09.896 "compare": false, 00:09:09.896 
"compare_and_write": false, 00:09:09.896 "abort": true, 00:09:09.896 "seek_hole": false, 00:09:09.896 "seek_data": false, 00:09:09.896 "copy": true, 00:09:09.896 "nvme_iov_md": false 00:09:09.896 }, 00:09:09.896 "memory_domains": [ 00:09:09.896 { 00:09:09.896 "dma_device_id": "system", 00:09:09.896 "dma_device_type": 1 00:09:09.896 }, 00:09:09.896 { 00:09:09.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.896 "dma_device_type": 2 00:09:09.896 } 00:09:09.896 ], 00:09:09.896 "driver_specific": {} 00:09:09.896 } 00:09:09.896 ]' 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.896 [2024-11-26 18:13:03.125466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:09.896 [2024-11-26 18:13:03.125546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.896 [2024-11-26 18:13:03.125571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:09.896 [2024-11-26 18:13:03.125584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.896 [2024-11-26 18:13:03.128213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.896 [2024-11-26 18:13:03.128264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:09.896 Passthru0 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.896 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:09.896 { 00:09:09.896 "name": "Malloc2", 00:09:09.896 "aliases": [ 00:09:09.896 "6856c288-572c-40b3-88cd-de856f71e561" 00:09:09.896 ], 00:09:09.896 "product_name": "Malloc disk", 00:09:09.896 "block_size": 512, 00:09:09.896 "num_blocks": 16384, 00:09:09.896 "uuid": "6856c288-572c-40b3-88cd-de856f71e561", 00:09:09.896 "assigned_rate_limits": { 00:09:09.896 "rw_ios_per_sec": 0, 00:09:09.896 "rw_mbytes_per_sec": 0, 00:09:09.896 "r_mbytes_per_sec": 0, 00:09:09.896 "w_mbytes_per_sec": 0 00:09:09.896 }, 00:09:09.896 "claimed": true, 00:09:09.896 "claim_type": "exclusive_write", 00:09:09.896 "zoned": false, 00:09:09.896 "supported_io_types": { 00:09:09.896 "read": true, 00:09:09.896 "write": true, 00:09:09.896 "unmap": true, 00:09:09.896 "flush": true, 00:09:09.896 "reset": true, 00:09:09.897 "nvme_admin": false, 00:09:09.897 "nvme_io": false, 00:09:09.897 "nvme_io_md": false, 00:09:09.897 "write_zeroes": true, 00:09:09.897 "zcopy": true, 00:09:09.897 "get_zone_info": false, 00:09:09.897 "zone_management": false, 00:09:09.897 "zone_append": false, 00:09:09.897 "compare": false, 00:09:09.897 "compare_and_write": false, 00:09:09.897 "abort": true, 00:09:09.897 "seek_hole": false, 00:09:09.897 "seek_data": false, 
00:09:09.897 "copy": true, 00:09:09.897 "nvme_iov_md": false 00:09:09.897 }, 00:09:09.897 "memory_domains": [ 00:09:09.897 { 00:09:09.897 "dma_device_id": "system", 00:09:09.897 "dma_device_type": 1 00:09:09.897 }, 00:09:09.897 { 00:09:09.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.897 "dma_device_type": 2 00:09:09.897 } 00:09:09.897 ], 00:09:09.897 "driver_specific": {} 00:09:09.897 }, 00:09:09.897 { 00:09:09.897 "name": "Passthru0", 00:09:09.897 "aliases": [ 00:09:09.897 "c29ed053-a093-55d4-8da6-96bfc4b53b9b" 00:09:09.897 ], 00:09:09.897 "product_name": "passthru", 00:09:09.897 "block_size": 512, 00:09:09.897 "num_blocks": 16384, 00:09:09.897 "uuid": "c29ed053-a093-55d4-8da6-96bfc4b53b9b", 00:09:09.897 "assigned_rate_limits": { 00:09:09.897 "rw_ios_per_sec": 0, 00:09:09.897 "rw_mbytes_per_sec": 0, 00:09:09.897 "r_mbytes_per_sec": 0, 00:09:09.897 "w_mbytes_per_sec": 0 00:09:09.897 }, 00:09:09.897 "claimed": false, 00:09:09.897 "zoned": false, 00:09:09.897 "supported_io_types": { 00:09:09.897 "read": true, 00:09:09.897 "write": true, 00:09:09.897 "unmap": true, 00:09:09.897 "flush": true, 00:09:09.897 "reset": true, 00:09:09.897 "nvme_admin": false, 00:09:09.897 "nvme_io": false, 00:09:09.897 "nvme_io_md": false, 00:09:09.897 "write_zeroes": true, 00:09:09.897 "zcopy": true, 00:09:09.897 "get_zone_info": false, 00:09:09.897 "zone_management": false, 00:09:09.897 "zone_append": false, 00:09:09.897 "compare": false, 00:09:09.897 "compare_and_write": false, 00:09:09.897 "abort": true, 00:09:09.897 "seek_hole": false, 00:09:09.897 "seek_data": false, 00:09:09.897 "copy": true, 00:09:09.897 "nvme_iov_md": false 00:09:09.897 }, 00:09:09.897 "memory_domains": [ 00:09:09.897 { 00:09:09.897 "dma_device_id": "system", 00:09:09.897 "dma_device_type": 1 00:09:09.897 }, 00:09:09.897 { 00:09:09.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.897 "dma_device_type": 2 00:09:09.897 } 00:09:09.897 ], 00:09:09.897 "driver_specific": { 00:09:09.897 "passthru": { 00:09:09.897 "name": "Passthru0", 00:09:09.897 "base_bdev_name": "Malloc2" 00:09:09.897 } 00:09:09.897 } 00:09:09.897 } 00:09:09.897 ]' 00:09:09.897 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:09.897 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:09.897 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:09.897 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.897 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:09.897 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.897 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:09.897 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.897 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:10.155 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.155 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:10.155 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.155 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:10.155 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.155 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:09:10.155 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:10.156 ************************************ 00:09:10.156 END TEST rpc_daemon_integrity 00:09:10.156 ************************************ 00:09:10.156 18:13:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:10.156 00:09:10.156 real 0m0.339s 00:09:10.156 user 0m0.191s 00:09:10.156 sys 0m0.036s 00:09:10.156 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.156 18:13:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:10.156 18:13:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:10.156 18:13:03 rpc -- rpc/rpc.sh@84 -- # killprocess 58059 00:09:10.156 18:13:03 rpc -- common/autotest_common.sh@954 -- # '[' -z 58059 ']' 00:09:10.156 18:13:03 rpc -- common/autotest_common.sh@958 -- # kill -0 58059 00:09:10.156 18:13:03 rpc -- common/autotest_common.sh@959 -- # uname 00:09:10.156 18:13:03 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.156 18:13:03 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58059 00:09:10.156 18:13:03 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.156 18:13:03 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.156 killing process with pid 58059 00:09:10.156 18:13:03 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58059' 00:09:10.156 18:13:03 rpc -- common/autotest_common.sh@973 -- # kill 58059 00:09:10.156 18:13:03 rpc -- common/autotest_common.sh@978 -- # wait 58059 00:09:13.446 00:09:13.446 real 0m5.784s 00:09:13.446 user 0m6.393s 00:09:13.446 sys 0m0.834s 00:09:13.446 18:13:06 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.446 18:13:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.446 ************************************ 00:09:13.446 END TEST rpc 00:09:13.446 ************************************ 00:09:13.446 18:13:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:13.446 18:13:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.446 18:13:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.446 18:13:06 -- common/autotest_common.sh@10 -- # set +x 00:09:13.446 ************************************ 00:09:13.446 START TEST skip_rpc 00:09:13.446 ************************************ 00:09:13.446 18:13:06 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:13.446 * Looking for test storage... 
00:09:13.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:13.446 18:13:06 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:13.446 18:13:06 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:13.446 18:13:06 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:13.446 18:13:06 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.446 18:13:06 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:13.446 18:13:06 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.446 18:13:06 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:13.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.446 --rc genhtml_branch_coverage=1 00:09:13.446 --rc genhtml_function_coverage=1 00:09:13.446 --rc genhtml_legend=1 00:09:13.446 --rc geninfo_all_blocks=1 00:09:13.446 --rc geninfo_unexecuted_blocks=1 00:09:13.446 00:09:13.446 ' 00:09:13.446 18:13:06 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:13.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.446 --rc genhtml_branch_coverage=1 00:09:13.446 --rc genhtml_function_coverage=1 00:09:13.446 --rc genhtml_legend=1 00:09:13.446 --rc geninfo_all_blocks=1 00:09:13.446 --rc geninfo_unexecuted_blocks=1 00:09:13.446 00:09:13.446 ' 00:09:13.446 18:13:06 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:13.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.446 --rc genhtml_branch_coverage=1 00:09:13.446 --rc genhtml_function_coverage=1 00:09:13.446 --rc genhtml_legend=1 00:09:13.446 --rc geninfo_all_blocks=1 00:09:13.446 --rc geninfo_unexecuted_blocks=1 00:09:13.446 00:09:13.446 ' 00:09:13.446 18:13:06 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:13.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.446 --rc genhtml_branch_coverage=1 00:09:13.446 --rc genhtml_function_coverage=1 00:09:13.446 --rc genhtml_legend=1 00:09:13.446 --rc geninfo_all_blocks=1 00:09:13.446 --rc geninfo_unexecuted_blocks=1 00:09:13.446 00:09:13.446 ' 00:09:13.446 18:13:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:13.446 18:13:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:13.446 18:13:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:13.446 18:13:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.446 18:13:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.446 18:13:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.446 ************************************ 00:09:13.446 START TEST skip_rpc 00:09:13.446 ************************************ 00:09:13.446 18:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:09:13.446 18:13:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58299 00:09:13.446 18:13:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:13.446 18:13:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:13.446 18:13:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:13.446 [2024-11-26 18:13:06.582794] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
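The scripts/common.sh trace above (cmp_versions 1.15 '<' 2) is how the harness decides whether the installed lcov predates 2.0 and therefore needs the legacy --rc lcov_* option spelling. A self-contained bash sketch of that dot-split, field-by-field comparison, illustrative only and not the literal common.sh implementation:

    # Split each version on . and -, then compare numerically field by field.
    version_lt() {
        local IFS=.-
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* options"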
00:09:13.446 [2024-11-26 18:13:06.582939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58299 ] 00:09:13.446 [2024-11-26 18:13:06.760993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.703 [2024-11-26 18:13:06.880141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58299 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58299 ']' 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58299 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58299 00:09:18.971 killing process with pid 58299 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58299' 00:09:18.971 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58299 00:09:18.972 18:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58299 00:09:20.876 00:09:20.876 real 0m7.645s 00:09:20.876 user 0m7.177s 00:09:20.876 sys 0m0.383s 00:09:20.876 18:13:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.876 18:13:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.876 ************************************ 00:09:20.876 END TEST skip_rpc 00:09:20.876 
************************************ 00:09:20.876 18:13:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:20.876 18:13:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.876 18:13:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.876 18:13:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.876 ************************************ 00:09:20.876 START TEST skip_rpc_with_json 00:09:20.876 ************************************ 00:09:20.876 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:20.876 18:13:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:20.876 18:13:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:20.876 18:13:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58403 00:09:20.876 18:13:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:20.876 18:13:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58403 00:09:20.876 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58403 ']' 00:09:20.876 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.877 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.877 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.877 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.877 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:21.137 [2024-11-26 18:13:14.301279] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
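The skip_rpc pass that closed just above inverts the usual assertion: spdk_tgt was started with --no-rpc-server, so rpc_cmd spdk_get_version has to fail, and the NOT wrapper turns a successful RPC into the test failure. Reduced to a sketch, treating rpc_cmd as the thin wrapper it is over scripts/rpc.py talking to /var/tmp/spdk.sock:

    # Expect-failure pattern: an answering RPC server is the bug here.
    if scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "FAIL: RPC server answered despite --no-rpc-server" >&2
        exit 1
    fi
    echo "PASS: spdk_get_version rejected, as expected"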
00:09:21.137 [2024-11-26 18:13:14.301424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58403 ] 00:09:21.397 [2024-11-26 18:13:14.477371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.397 [2024-11-26 18:13:14.598877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.333 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.333 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:22.333 18:13:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:22.333 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.333 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:22.333 [2024-11-26 18:13:15.551926] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:22.333 request: 00:09:22.333 { 00:09:22.333 "trtype": "tcp", 00:09:22.333 "method": "nvmf_get_transports", 00:09:22.333 "req_id": 1 00:09:22.333 } 00:09:22.333 Got JSON-RPC error response 00:09:22.333 response: 00:09:22.333 { 00:09:22.333 "code": -19, 00:09:22.333 "message": "No such device" 00:09:22.333 } 00:09:22.333 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:22.333 18:13:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:22.333 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.333 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:22.333 [2024-11-26 18:13:15.564047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.333 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.333 18:13:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:22.333 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.333 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:22.615 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.615 18:13:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:22.615 { 00:09:22.615 "subsystems": [ 00:09:22.615 { 00:09:22.615 "subsystem": "fsdev", 00:09:22.615 "config": [ 00:09:22.615 { 00:09:22.615 "method": "fsdev_set_opts", 00:09:22.615 "params": { 00:09:22.615 "fsdev_io_pool_size": 65535, 00:09:22.615 "fsdev_io_cache_size": 256 00:09:22.615 } 00:09:22.615 } 00:09:22.615 ] 00:09:22.615 }, 00:09:22.615 { 00:09:22.615 "subsystem": "keyring", 00:09:22.615 "config": [] 00:09:22.615 }, 00:09:22.615 { 00:09:22.615 "subsystem": "iobuf", 00:09:22.615 "config": [ 00:09:22.615 { 00:09:22.615 "method": "iobuf_set_options", 00:09:22.615 "params": { 00:09:22.615 "small_pool_count": 8192, 00:09:22.615 "large_pool_count": 1024, 00:09:22.615 "small_bufsize": 8192, 00:09:22.615 "large_bufsize": 135168, 00:09:22.615 "enable_numa": false 00:09:22.615 } 00:09:22.615 } 00:09:22.615 ] 00:09:22.615 }, 00:09:22.615 { 00:09:22.615 "subsystem": "sock", 00:09:22.615 "config": [ 00:09:22.615 { 
00:09:22.615 "method": "sock_set_default_impl", 00:09:22.615 "params": { 00:09:22.615 "impl_name": "posix" 00:09:22.615 } 00:09:22.615 }, 00:09:22.615 { 00:09:22.615 "method": "sock_impl_set_options", 00:09:22.615 "params": { 00:09:22.615 "impl_name": "ssl", 00:09:22.616 "recv_buf_size": 4096, 00:09:22.616 "send_buf_size": 4096, 00:09:22.616 "enable_recv_pipe": true, 00:09:22.616 "enable_quickack": false, 00:09:22.616 "enable_placement_id": 0, 00:09:22.616 "enable_zerocopy_send_server": true, 00:09:22.616 "enable_zerocopy_send_client": false, 00:09:22.616 "zerocopy_threshold": 0, 00:09:22.616 "tls_version": 0, 00:09:22.616 "enable_ktls": false 00:09:22.616 } 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "method": "sock_impl_set_options", 00:09:22.616 "params": { 00:09:22.616 "impl_name": "posix", 00:09:22.616 "recv_buf_size": 2097152, 00:09:22.616 "send_buf_size": 2097152, 00:09:22.616 "enable_recv_pipe": true, 00:09:22.616 "enable_quickack": false, 00:09:22.616 "enable_placement_id": 0, 00:09:22.616 "enable_zerocopy_send_server": true, 00:09:22.616 "enable_zerocopy_send_client": false, 00:09:22.616 "zerocopy_threshold": 0, 00:09:22.616 "tls_version": 0, 00:09:22.616 "enable_ktls": false 00:09:22.616 } 00:09:22.616 } 00:09:22.616 ] 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "subsystem": "vmd", 00:09:22.616 "config": [] 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "subsystem": "accel", 00:09:22.616 "config": [ 00:09:22.616 { 00:09:22.616 "method": "accel_set_options", 00:09:22.616 "params": { 00:09:22.616 "small_cache_size": 128, 00:09:22.616 "large_cache_size": 16, 00:09:22.616 "task_count": 2048, 00:09:22.616 "sequence_count": 2048, 00:09:22.616 "buf_count": 2048 00:09:22.616 } 00:09:22.616 } 00:09:22.616 ] 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "subsystem": "bdev", 00:09:22.616 "config": [ 00:09:22.616 { 00:09:22.616 "method": "bdev_set_options", 00:09:22.616 "params": { 00:09:22.616 "bdev_io_pool_size": 65535, 00:09:22.616 "bdev_io_cache_size": 256, 00:09:22.616 "bdev_auto_examine": true, 00:09:22.616 "iobuf_small_cache_size": 128, 00:09:22.616 "iobuf_large_cache_size": 16 00:09:22.616 } 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "method": "bdev_raid_set_options", 00:09:22.616 "params": { 00:09:22.616 "process_window_size_kb": 1024, 00:09:22.616 "process_max_bandwidth_mb_sec": 0 00:09:22.616 } 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "method": "bdev_iscsi_set_options", 00:09:22.616 "params": { 00:09:22.616 "timeout_sec": 30 00:09:22.616 } 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "method": "bdev_nvme_set_options", 00:09:22.616 "params": { 00:09:22.616 "action_on_timeout": "none", 00:09:22.616 "timeout_us": 0, 00:09:22.616 "timeout_admin_us": 0, 00:09:22.616 "keep_alive_timeout_ms": 10000, 00:09:22.616 "arbitration_burst": 0, 00:09:22.616 "low_priority_weight": 0, 00:09:22.616 "medium_priority_weight": 0, 00:09:22.616 "high_priority_weight": 0, 00:09:22.616 "nvme_adminq_poll_period_us": 10000, 00:09:22.616 "nvme_ioq_poll_period_us": 0, 00:09:22.616 "io_queue_requests": 0, 00:09:22.616 "delay_cmd_submit": true, 00:09:22.616 "transport_retry_count": 4, 00:09:22.616 "bdev_retry_count": 3, 00:09:22.616 "transport_ack_timeout": 0, 00:09:22.616 "ctrlr_loss_timeout_sec": 0, 00:09:22.616 "reconnect_delay_sec": 0, 00:09:22.616 "fast_io_fail_timeout_sec": 0, 00:09:22.616 "disable_auto_failback": false, 00:09:22.616 "generate_uuids": false, 00:09:22.616 "transport_tos": 0, 00:09:22.616 "nvme_error_stat": false, 00:09:22.616 "rdma_srq_size": 0, 00:09:22.616 "io_path_stat": false, 
00:09:22.616 "allow_accel_sequence": false, 00:09:22.616 "rdma_max_cq_size": 0, 00:09:22.616 "rdma_cm_event_timeout_ms": 0, 00:09:22.616 "dhchap_digests": [ 00:09:22.616 "sha256", 00:09:22.616 "sha384", 00:09:22.616 "sha512" 00:09:22.616 ], 00:09:22.616 "dhchap_dhgroups": [ 00:09:22.616 "null", 00:09:22.616 "ffdhe2048", 00:09:22.616 "ffdhe3072", 00:09:22.616 "ffdhe4096", 00:09:22.616 "ffdhe6144", 00:09:22.616 "ffdhe8192" 00:09:22.616 ] 00:09:22.616 } 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "method": "bdev_nvme_set_hotplug", 00:09:22.616 "params": { 00:09:22.616 "period_us": 100000, 00:09:22.616 "enable": false 00:09:22.616 } 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "method": "bdev_wait_for_examine" 00:09:22.616 } 00:09:22.616 ] 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "subsystem": "scsi", 00:09:22.616 "config": null 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "subsystem": "scheduler", 00:09:22.616 "config": [ 00:09:22.616 { 00:09:22.616 "method": "framework_set_scheduler", 00:09:22.616 "params": { 00:09:22.616 "name": "static" 00:09:22.616 } 00:09:22.616 } 00:09:22.616 ] 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "subsystem": "vhost_scsi", 00:09:22.616 "config": [] 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "subsystem": "vhost_blk", 00:09:22.616 "config": [] 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "subsystem": "ublk", 00:09:22.616 "config": [] 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "subsystem": "nbd", 00:09:22.616 "config": [] 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "subsystem": "nvmf", 00:09:22.616 "config": [ 00:09:22.616 { 00:09:22.616 "method": "nvmf_set_config", 00:09:22.616 "params": { 00:09:22.616 "discovery_filter": "match_any", 00:09:22.616 "admin_cmd_passthru": { 00:09:22.616 "identify_ctrlr": false 00:09:22.616 }, 00:09:22.616 "dhchap_digests": [ 00:09:22.616 "sha256", 00:09:22.616 "sha384", 00:09:22.616 "sha512" 00:09:22.616 ], 00:09:22.616 "dhchap_dhgroups": [ 00:09:22.616 "null", 00:09:22.616 "ffdhe2048", 00:09:22.616 "ffdhe3072", 00:09:22.616 "ffdhe4096", 00:09:22.616 "ffdhe6144", 00:09:22.616 "ffdhe8192" 00:09:22.616 ] 00:09:22.616 } 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "method": "nvmf_set_max_subsystems", 00:09:22.616 "params": { 00:09:22.616 "max_subsystems": 1024 00:09:22.616 } 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "method": "nvmf_set_crdt", 00:09:22.616 "params": { 00:09:22.616 "crdt1": 0, 00:09:22.616 "crdt2": 0, 00:09:22.616 "crdt3": 0 00:09:22.616 } 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "method": "nvmf_create_transport", 00:09:22.616 "params": { 00:09:22.616 "trtype": "TCP", 00:09:22.616 "max_queue_depth": 128, 00:09:22.616 "max_io_qpairs_per_ctrlr": 127, 00:09:22.616 "in_capsule_data_size": 4096, 00:09:22.616 "max_io_size": 131072, 00:09:22.616 "io_unit_size": 131072, 00:09:22.616 "max_aq_depth": 128, 00:09:22.616 "num_shared_buffers": 511, 00:09:22.616 "buf_cache_size": 4294967295, 00:09:22.616 "dif_insert_or_strip": false, 00:09:22.616 "zcopy": false, 00:09:22.616 "c2h_success": true, 00:09:22.616 "sock_priority": 0, 00:09:22.616 "abort_timeout_sec": 1, 00:09:22.616 "ack_timeout": 0, 00:09:22.616 "data_wr_pool_size": 0 00:09:22.616 } 00:09:22.616 } 00:09:22.616 ] 00:09:22.616 }, 00:09:22.616 { 00:09:22.616 "subsystem": "iscsi", 00:09:22.616 "config": [ 00:09:22.616 { 00:09:22.616 "method": "iscsi_set_options", 00:09:22.616 "params": { 00:09:22.616 "node_base": "iqn.2016-06.io.spdk", 00:09:22.616 "max_sessions": 128, 00:09:22.616 "max_connections_per_session": 2, 00:09:22.616 "max_queue_depth": 64, 00:09:22.616 
"default_time2wait": 2, 00:09:22.616 "default_time2retain": 20, 00:09:22.616 "first_burst_length": 8192, 00:09:22.616 "immediate_data": true, 00:09:22.616 "allow_duplicated_isid": false, 00:09:22.616 "error_recovery_level": 0, 00:09:22.616 "nop_timeout": 60, 00:09:22.616 "nop_in_interval": 30, 00:09:22.616 "disable_chap": false, 00:09:22.616 "require_chap": false, 00:09:22.616 "mutual_chap": false, 00:09:22.616 "chap_group": 0, 00:09:22.616 "max_large_datain_per_connection": 64, 00:09:22.616 "max_r2t_per_connection": 4, 00:09:22.616 "pdu_pool_size": 36864, 00:09:22.616 "immediate_data_pool_size": 16384, 00:09:22.616 "data_out_pool_size": 2048 00:09:22.616 } 00:09:22.616 } 00:09:22.616 ] 00:09:22.616 } 00:09:22.616 ] 00:09:22.616 } 00:09:22.616 18:13:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:22.616 18:13:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58403 00:09:22.616 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58403 ']' 00:09:22.616 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58403 00:09:22.616 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:22.616 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.616 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58403 00:09:22.616 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.616 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.616 killing process with pid 58403 00:09:22.616 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58403' 00:09:22.616 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58403 00:09:22.616 18:13:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58403 00:09:25.163 18:13:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58459 00:09:25.163 18:13:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:25.163 18:13:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:30.456 18:13:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58459 00:09:30.456 18:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58459 ']' 00:09:30.456 18:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58459 00:09:30.456 18:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:30.457 18:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.457 18:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58459 00:09:30.457 18:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.457 18:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.457 killing process with pid 58459 00:09:30.457 18:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58459' 00:09:30.457 18:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58459 00:09:30.457 18:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58459 00:09:32.989 18:13:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:32.989 18:13:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:32.989 00:09:32.989 real 0m11.978s 00:09:32.989 user 0m11.403s 00:09:32.989 sys 0m0.866s 00:09:32.989 18:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.989 18:13:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:32.989 ************************************ 00:09:32.989 END TEST skip_rpc_with_json 00:09:32.989 ************************************ 00:09:32.989 18:13:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:32.989 18:13:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.989 18:13:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.989 18:13:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.989 ************************************ 00:09:32.989 START TEST skip_rpc_with_delay 00:09:32.989 ************************************ 00:09:32.989 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:32.989 18:13:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:32.989 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:32.989 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:32.989 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:32.989 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.989 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:32.989 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.989 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:32.989 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.990 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:32.990 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:32.990 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:33.254 [2024-11-26 18:13:26.353971] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
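The skip_rpc_with_json pass recorded above boils down to a save/replay round-trip: dump the live configuration with save_config, cold-start a fresh target from that JSON, then grep its log for the 'TCP Transport Init' notice to prove the nvmf transport was recreated purely from the file. In sketch form, with the paths taken from this log and the same sleep 5 the test itself uses in place of a listen handshake:

    scripts/rpc.py save_config > test/rpc/config.json
    build/bin/spdk_tgt --no-rpc-server -m 0x1 \
        --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5                                          # as in skip_rpc.sh@48
    grep -q 'TCP Transport Init' test/rpc/log.txt    # transport came back from JSON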
00:09:33.254 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:33.254 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.254 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.254 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.254 00:09:33.254 real 0m0.196s 00:09:33.254 user 0m0.107s 00:09:33.254 sys 0m0.087s 00:09:33.254 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.254 18:13:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:33.254 ************************************ 00:09:33.254 END TEST skip_rpc_with_delay 00:09:33.254 ************************************ 00:09:33.254 18:13:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:33.254 18:13:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:33.254 18:13:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:33.254 18:13:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.254 18:13:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.254 18:13:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.254 ************************************ 00:09:33.254 START TEST exit_on_failed_rpc_init 00:09:33.254 ************************************ 00:09:33.254 18:13:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:33.254 18:13:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58598 00:09:33.254 18:13:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:33.254 18:13:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58598 00:09:33.254 18:13:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58598 ']' 00:09:33.254 18:13:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.254 18:13:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.254 18:13:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.254 18:13:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.254 18:13:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:33.512 [2024-11-26 18:13:26.612318] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
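skip_rpc_with_delay, which also closed above, asserts the inverse startup property: spdk_tgt must refuse --wait-for-rpc when no RPC server is going to be started, which is exactly the *ERROR* logged at 18:13:26. As a sketch, using the flags shown verbatim in the trace:

    # The flag combination itself is the test input; success would be the bug.
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi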
00:09:33.512 [2024-11-26 18:13:26.612450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58598 ] 00:09:33.512 [2024-11-26 18:13:26.789347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.770 [2024-11-26 18:13:26.911048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:34.708 18:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:34.708 [2024-11-26 18:13:28.005389] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:09:34.708 [2024-11-26 18:13:28.005586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58622 ] 00:09:34.968 [2024-11-26 18:13:28.185212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.227 [2024-11-26 18:13:28.323971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.227 [2024-11-26 18:13:28.324363] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
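The 'socket in use' error above is the whole point of exit_on_failed_rpc_init: the first target (pid 58598, core mask 0x1) owns /var/tmp/spdk.sock, so a second instance on core mask 0x2 must fail to listen and exit non-zero, which the harness then maps through its es bookkeeping on the following lines. A minimal sketch, with the harness's waitforlisten replaced by a crude sleep:

    build/bin/spdk_tgt -m 0x1 &          # first instance holds /var/tmp/spdk.sock
    sleep 1                              # stand-in for waitforlisten
    if build/bin/spdk_tgt -m 0x2; then   # same default RPC socket path
        echo "FAIL: second target should not have started" >&2
        exit 1
    fi
    kill %1                              # tear down the first instance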
00:09:35.227 [2024-11-26 18:13:28.324503] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:35.227 [2024-11-26 18:13:28.324558] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58598 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58598 ']' 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58598 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58598 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58598' 00:09:35.487 killing process with pid 58598 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58598 00:09:35.487 18:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58598 00:09:38.792 ************************************ 00:09:38.792 END TEST exit_on_failed_rpc_init 00:09:38.792 ************************************ 00:09:38.792 00:09:38.792 real 0m5.008s 00:09:38.792 user 0m5.440s 00:09:38.792 sys 0m0.596s 00:09:38.792 18:13:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.792 18:13:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:38.792 18:13:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:38.792 ************************************ 00:09:38.792 END TEST skip_rpc 00:09:38.792 ************************************ 00:09:38.792 00:09:38.792 real 0m25.300s 00:09:38.792 user 0m24.333s 00:09:38.792 sys 0m2.215s 00:09:38.792 18:13:31 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.792 18:13:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.792 18:13:31 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:38.792 18:13:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.792 18:13:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.792 18:13:31 -- common/autotest_common.sh@10 -- # set +x 00:09:38.792 
************************************ 00:09:38.792 START TEST rpc_client 00:09:38.792 ************************************ 00:09:38.792 18:13:31 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:38.792 * Looking for test storage... 00:09:38.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:38.792 18:13:31 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.792 18:13:31 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.792 18:13:31 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.792 18:13:31 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.792 18:13:31 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:38.792 18:13:31 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.792 18:13:31 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:38.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.792 --rc genhtml_branch_coverage=1 00:09:38.792 --rc genhtml_function_coverage=1 00:09:38.792 --rc genhtml_legend=1 00:09:38.792 --rc geninfo_all_blocks=1 00:09:38.792 --rc geninfo_unexecuted_blocks=1 00:09:38.792 00:09:38.792 ' 00:09:38.792 18:13:31 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.792 --rc genhtml_branch_coverage=1 00:09:38.792 --rc genhtml_function_coverage=1 00:09:38.792 --rc genhtml_legend=1 00:09:38.792 --rc geninfo_all_blocks=1 00:09:38.792 --rc geninfo_unexecuted_blocks=1 00:09:38.792 00:09:38.792 ' 00:09:38.792 18:13:31 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:38.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.792 --rc genhtml_branch_coverage=1 00:09:38.792 --rc genhtml_function_coverage=1 00:09:38.792 --rc genhtml_legend=1 00:09:38.792 --rc geninfo_all_blocks=1 00:09:38.792 --rc geninfo_unexecuted_blocks=1 00:09:38.792 00:09:38.792 ' 00:09:38.792 18:13:31 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.792 --rc genhtml_branch_coverage=1 00:09:38.792 --rc genhtml_function_coverage=1 00:09:38.792 --rc genhtml_legend=1 00:09:38.792 --rc geninfo_all_blocks=1 00:09:38.792 --rc geninfo_unexecuted_blocks=1 00:09:38.792 00:09:38.792 ' 00:09:38.792 18:13:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:38.792 OK 00:09:38.792 18:13:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:38.792 00:09:38.792 real 0m0.330s 00:09:38.792 user 0m0.183s 00:09:38.792 sys 0m0.162s 00:09:38.792 18:13:31 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.792 18:13:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:38.792 ************************************ 00:09:38.792 END TEST rpc_client 00:09:38.792 ************************************ 00:09:38.792 18:13:32 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:38.792 18:13:32 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.792 18:13:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.792 18:13:32 -- common/autotest_common.sh@10 -- # set +x 00:09:38.792 ************************************ 00:09:38.792 START TEST json_config 00:09:38.792 ************************************ 00:09:38.792 18:13:32 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:38.792 18:13:32 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.792 18:13:32 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.793 18:13:32 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:09:39.074 18:13:32 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:39.074 18:13:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.074 18:13:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.074 18:13:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.074 18:13:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.074 18:13:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.074 18:13:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.074 18:13:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.074 18:13:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.074 18:13:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.074 18:13:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.074 18:13:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.074 18:13:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:39.074 18:13:32 json_config -- scripts/common.sh@345 -- # : 1 00:09:39.074 18:13:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.074 18:13:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.074 18:13:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:39.074 18:13:32 json_config -- scripts/common.sh@353 -- # local d=1 00:09:39.074 18:13:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.074 18:13:32 json_config -- scripts/common.sh@355 -- # echo 1 00:09:39.074 18:13:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.074 18:13:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:39.074 18:13:32 json_config -- scripts/common.sh@353 -- # local d=2 00:09:39.074 18:13:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.074 18:13:32 json_config -- scripts/common.sh@355 -- # echo 2 00:09:39.074 18:13:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.074 18:13:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.074 18:13:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.074 18:13:32 json_config -- scripts/common.sh@368 -- # return 0 00:09:39.074 18:13:32 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.074 18:13:32 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:39.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.074 --rc genhtml_branch_coverage=1 00:09:39.074 --rc genhtml_function_coverage=1 00:09:39.074 --rc genhtml_legend=1 00:09:39.074 --rc geninfo_all_blocks=1 00:09:39.074 --rc geninfo_unexecuted_blocks=1 00:09:39.074 00:09:39.074 ' 00:09:39.074 18:13:32 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:39.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.074 --rc genhtml_branch_coverage=1 00:09:39.074 --rc genhtml_function_coverage=1 00:09:39.074 --rc genhtml_legend=1 00:09:39.074 --rc geninfo_all_blocks=1 00:09:39.074 --rc geninfo_unexecuted_blocks=1 00:09:39.074 00:09:39.074 ' 00:09:39.074 18:13:32 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:39.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.074 --rc genhtml_branch_coverage=1 00:09:39.074 --rc genhtml_function_coverage=1 00:09:39.074 --rc genhtml_legend=1 00:09:39.074 --rc geninfo_all_blocks=1 00:09:39.074 --rc geninfo_unexecuted_blocks=1 00:09:39.074 00:09:39.074 ' 00:09:39.074 18:13:32 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:39.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.074 --rc genhtml_branch_coverage=1 00:09:39.074 --rc genhtml_function_coverage=1 00:09:39.074 --rc genhtml_legend=1 00:09:39.074 --rc geninfo_all_blocks=1 00:09:39.074 --rc geninfo_unexecuted_blocks=1 00:09:39.074 00:09:39.074 ' 00:09:39.074 18:13:32 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.074 18:13:32 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:57a1b531-b449-4b85-a403-bab0c2dbdf9d 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=57a1b531-b449-4b85-a403-bab0c2dbdf9d 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:39.074 18:13:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.074 18:13:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.074 18:13:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.074 18:13:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.074 18:13:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.074 18:13:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.074 18:13:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.074 18:13:32 json_config -- paths/export.sh@5 -- # export PATH 00:09:39.074 18:13:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@51 -- # : 0 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.074 18:13:32 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.074 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.074 18:13:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.074 18:13:32 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:39.074 18:13:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:39.074 18:13:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:39.074 18:13:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:39.074 18:13:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:39.074 18:13:32 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:09:39.075 WARNING: No tests are enabled so not running JSON configuration tests 00:09:39.075 18:13:32 json_config -- json_config/json_config.sh@28 -- # exit 0 00:09:39.075 00:09:39.075 real 0m0.242s 00:09:39.075 user 0m0.154s 00:09:39.075 sys 0m0.085s 00:09:39.075 ************************************ 00:09:39.075 END TEST json_config 00:09:39.075 ************************************ 00:09:39.075 18:13:32 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.075 18:13:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:39.075 18:13:32 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:39.075 18:13:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.075 18:13:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.075 18:13:32 -- common/autotest_common.sh@10 -- # set +x 00:09:39.075 ************************************ 00:09:39.075 START TEST json_config_extra_key 00:09:39.075 ************************************ 00:09:39.075 18:13:32 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:39.335 18:13:32 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:39.335 18:13:32 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:09:39.335 18:13:32 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:39.335 18:13:32 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.335 18:13:32 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:39.335 18:13:32 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.335 18:13:32 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:39.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.335 --rc genhtml_branch_coverage=1 00:09:39.335 --rc genhtml_function_coverage=1 00:09:39.335 --rc genhtml_legend=1 00:09:39.335 --rc geninfo_all_blocks=1 00:09:39.335 --rc geninfo_unexecuted_blocks=1 00:09:39.335 00:09:39.335 ' 00:09:39.335 18:13:32 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:39.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.335 --rc genhtml_branch_coverage=1 00:09:39.335 --rc genhtml_function_coverage=1 00:09:39.335 --rc genhtml_legend=1 00:09:39.335 --rc geninfo_all_blocks=1 00:09:39.335 --rc geninfo_unexecuted_blocks=1 00:09:39.335 00:09:39.335 ' 00:09:39.335 18:13:32 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:39.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.335 --rc genhtml_branch_coverage=1 00:09:39.335 --rc genhtml_function_coverage=1 00:09:39.335 --rc genhtml_legend=1 00:09:39.335 --rc geninfo_all_blocks=1 00:09:39.335 --rc geninfo_unexecuted_blocks=1 00:09:39.335 00:09:39.335 ' 00:09:39.335 18:13:32 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:39.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.335 --rc genhtml_branch_coverage=1 00:09:39.335 --rc 
genhtml_function_coverage=1 00:09:39.335 --rc genhtml_legend=1 00:09:39.335 --rc geninfo_all_blocks=1 00:09:39.335 --rc geninfo_unexecuted_blocks=1 00:09:39.335 00:09:39.335 ' 00:09:39.335 18:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:57a1b531-b449-4b85-a403-bab0c2dbdf9d 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=57a1b531-b449-4b85-a403-bab0c2dbdf9d 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.335 18:13:32 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.335 18:13:32 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.335 18:13:32 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.335 18:13:32 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.335 18:13:32 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:39.335 18:13:32 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.335 18:13:32 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.336 18:13:32 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.336 18:13:32 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.336 18:13:32 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.336 18:13:32 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.336 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.336 18:13:32 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.336 18:13:32 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.336 18:13:32 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.336 18:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:39.336 18:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:39.336 18:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:39.336 18:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:39.336 INFO: launching applications... 00:09:39.336 18:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:39.336 18:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:39.336 18:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:39.336 18:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:39.336 18:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:39.336 18:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:39.336 18:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
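Before launching, the harness registers the target app in per-app associative arrays and arms an ERR trap, as traced above from test/json_config/common.sh. A condensed sketch of that bookkeeping (the trap body is simplified; the real on_error_exit does more):

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
    trap 'echo "error at line ${LINENO}" >&2; exit 1' ERR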
00:09:39.336 18:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:39.336 18:13:32 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:39.336 18:13:32 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:39.336 18:13:32 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:39.336 18:13:32 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:39.336 18:13:32 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:39.336 18:13:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:39.336 18:13:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:39.336 18:13:32 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58837 00:09:39.336 18:13:32 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:39.336 Waiting for target to run... 00:09:39.336 18:13:32 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58837 /var/tmp/spdk_tgt.sock 00:09:39.336 18:13:32 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58837 ']' 00:09:39.336 18:13:32 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:39.336 18:13:32 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:39.336 18:13:32 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:39.336 18:13:32 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:39.336 18:13:32 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.336 18:13:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:39.595 [2024-11-26 18:13:32.696372] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:09:39.595 [2024-11-26 18:13:32.696666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58837 ] 00:09:39.854 [2024-11-26 18:13:33.131772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.112 [2024-11-26 18:13:33.257669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.100 18:13:34 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.100 18:13:34 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:09:41.100 18:13:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:41.100 00:09:41.100 18:13:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:09:41.100 INFO: shutting down applications... 
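The shutdown that follows sends SIGINT and then polls the PID until the process exits. A minimal sketch of the pattern (PID and the 30 x 0.5 s budget taken from this run's trace of json_config/common.sh):

    pid=58837
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # still alive? keep waiting
        sleep 0.5
    done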
00:09:41.100 18:13:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:41.100 18:13:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:41.100 18:13:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:41.100 18:13:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58837 ]] 00:09:41.100 18:13:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58837 00:09:41.100 18:13:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:41.100 18:13:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:41.100 18:13:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58837 00:09:41.100 18:13:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:41.359 18:13:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:41.359 18:13:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:41.359 18:13:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58837 00:09:41.359 18:13:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:41.927 18:13:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:41.927 18:13:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:41.927 18:13:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58837 00:09:41.927 18:13:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:42.497 18:13:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:42.497 18:13:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:42.497 18:13:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58837 00:09:42.497 18:13:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:43.066 18:13:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:43.066 18:13:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:43.066 18:13:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58837 00:09:43.066 18:13:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:43.634 18:13:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:43.634 18:13:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:43.634 18:13:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58837 00:09:43.634 18:13:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:43.894 18:13:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:43.894 18:13:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:43.894 18:13:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58837 00:09:43.894 18:13:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:44.461 18:13:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:44.461 18:13:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:44.461 18:13:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58837 00:09:44.461 18:13:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:44.461 18:13:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:44.461 18:13:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:44.461 SPDK target shutdown 
done 00:09:44.461 18:13:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:44.461 18:13:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:44.461 Success 00:09:44.461 ************************************ 00:09:44.461 END TEST json_config_extra_key 00:09:44.461 ************************************ 00:09:44.461 00:09:44.461 real 0m5.361s 00:09:44.461 user 0m4.701s 00:09:44.461 sys 0m0.653s 00:09:44.461 18:13:37 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.461 18:13:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:44.461 18:13:37 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:44.462 18:13:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.462 18:13:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.462 18:13:37 -- common/autotest_common.sh@10 -- # set +x 00:09:44.462 ************************************ 00:09:44.462 START TEST alias_rpc 00:09:44.462 ************************************ 00:09:44.462 18:13:37 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:44.720 * Looking for test storage... 00:09:44.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:44.720 18:13:37 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:44.720 18:13:37 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:44.720 18:13:37 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.720 18:13:37 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:44.720 18:13:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.721 18:13:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:44.721 18:13:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.721 18:13:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.721 18:13:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.721 18:13:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:44.721 18:13:37 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.721 18:13:37 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.721 --rc genhtml_branch_coverage=1 00:09:44.721 --rc genhtml_function_coverage=1 00:09:44.721 --rc genhtml_legend=1 00:09:44.721 --rc geninfo_all_blocks=1 00:09:44.721 --rc geninfo_unexecuted_blocks=1 00:09:44.721 00:09:44.721 ' 00:09:44.721 18:13:37 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.721 --rc genhtml_branch_coverage=1 00:09:44.721 --rc genhtml_function_coverage=1 00:09:44.721 --rc genhtml_legend=1 00:09:44.721 --rc geninfo_all_blocks=1 00:09:44.721 --rc geninfo_unexecuted_blocks=1 00:09:44.721 00:09:44.721 ' 00:09:44.721 18:13:37 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.721 --rc genhtml_branch_coverage=1 00:09:44.721 --rc genhtml_function_coverage=1 00:09:44.721 --rc genhtml_legend=1 00:09:44.721 --rc geninfo_all_blocks=1 00:09:44.721 --rc geninfo_unexecuted_blocks=1 00:09:44.721 00:09:44.721 ' 00:09:44.721 18:13:37 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.721 --rc genhtml_branch_coverage=1 00:09:44.721 --rc genhtml_function_coverage=1 00:09:44.721 --rc genhtml_legend=1 00:09:44.721 --rc geninfo_all_blocks=1 00:09:44.721 --rc geninfo_unexecuted_blocks=1 00:09:44.721 00:09:44.721 ' 00:09:44.721 18:13:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:44.721 18:13:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58961 00:09:44.721 18:13:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:44.721 18:13:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58961 00:09:44.721 18:13:37 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58961 ']' 00:09:44.721 18:13:37 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.721 18:13:37 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.721 18:13:37 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:09:44.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.721 18:13:37 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.721 18:13:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.981 [2024-11-26 18:13:38.106866] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:09:44.981 [2024-11-26 18:13:38.107103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58961 ] 00:09:44.981 [2024-11-26 18:13:38.291605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.239 [2024-11-26 18:13:38.429541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.175 18:13:39 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.175 18:13:39 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:46.175 18:13:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:46.434 18:13:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58961 00:09:46.434 18:13:39 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58961 ']' 00:09:46.434 18:13:39 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58961 00:09:46.435 18:13:39 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:46.435 18:13:39 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.435 18:13:39 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58961 00:09:46.435 18:13:39 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.435 18:13:39 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.435 18:13:39 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58961' 00:09:46.435 killing process with pid 58961 00:09:46.435 18:13:39 alias_rpc -- common/autotest_common.sh@973 -- # kill 58961 00:09:46.435 18:13:39 alias_rpc -- common/autotest_common.sh@978 -- # wait 58961 00:09:49.745 00:09:49.745 real 0m4.903s 00:09:49.745 user 0m4.996s 00:09:49.745 sys 0m0.641s 00:09:49.745 18:13:42 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.745 18:13:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.745 ************************************ 00:09:49.745 END TEST alias_rpc 00:09:49.745 ************************************ 00:09:49.745 18:13:42 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:49.745 18:13:42 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:49.745 18:13:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.745 18:13:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.745 18:13:42 -- common/autotest_common.sh@10 -- # set +x 00:09:49.745 ************************************ 00:09:49.745 START TEST spdkcli_tcp 00:09:49.745 ************************************ 00:09:49.745 18:13:42 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:49.745 * Looking for test storage... 
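killprocess, traced above for PID 58961, checks the process's command name before signalling so it never kills a bare sudo wrapper. A sketch under the assumption that the target is a child of the current shell (otherwise wait has nothing to reap):

    pid=58961
    if [ "$(uname)" = Linux ] && [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    fi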
00:09:49.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:49.745 18:13:42 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:49.745 18:13:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:49.745 18:13:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:49.745 18:13:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:49.745 18:13:42 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:49.746 18:13:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:49.746 18:13:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.746 18:13:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:49.746 18:13:42 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.746 18:13:42 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:49.746 18:13:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:49.746 18:13:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.746 18:13:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:49.746 18:13:42 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.746 18:13:42 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.746 18:13:42 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.746 18:13:42 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:49.746 18:13:42 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.746 18:13:42 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:49.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.746 --rc genhtml_branch_coverage=1 00:09:49.746 --rc genhtml_function_coverage=1 00:09:49.746 --rc genhtml_legend=1 00:09:49.746 --rc geninfo_all_blocks=1 00:09:49.746 --rc geninfo_unexecuted_blocks=1 00:09:49.746 00:09:49.746 ' 00:09:49.746 18:13:42 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:49.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.746 --rc genhtml_branch_coverage=1 00:09:49.746 --rc genhtml_function_coverage=1 00:09:49.746 --rc genhtml_legend=1 00:09:49.746 --rc geninfo_all_blocks=1 00:09:49.746 --rc geninfo_unexecuted_blocks=1 00:09:49.746 
00:09:49.746 ' 00:09:49.746 18:13:42 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:49.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.746 --rc genhtml_branch_coverage=1 00:09:49.746 --rc genhtml_function_coverage=1 00:09:49.746 --rc genhtml_legend=1 00:09:49.746 --rc geninfo_all_blocks=1 00:09:49.746 --rc geninfo_unexecuted_blocks=1 00:09:49.746 00:09:49.746 ' 00:09:49.746 18:13:42 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:49.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.746 --rc genhtml_branch_coverage=1 00:09:49.746 --rc genhtml_function_coverage=1 00:09:49.746 --rc genhtml_legend=1 00:09:49.746 --rc geninfo_all_blocks=1 00:09:49.746 --rc geninfo_unexecuted_blocks=1 00:09:49.746 00:09:49.746 ' 00:09:49.746 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:49.746 18:13:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:49.746 18:13:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:49.746 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:49.746 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:49.746 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:49.746 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:49.746 18:13:42 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:49.746 18:13:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.746 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59072 00:09:49.746 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:49.746 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59072 00:09:49.746 18:13:42 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59072 ']' 00:09:49.746 18:13:42 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.746 18:13:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.746 18:13:42 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.746 18:13:42 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.746 18:13:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.746 [2024-11-26 18:13:43.076577] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
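The spdkcli_tcp run below reaches the target's UNIX-domain RPC socket over TCP by interposing socat, then issues RPCs against 127.0.0.1:9998. A sketch using the exact values from the trace, assuming a target is already serving /var/tmp/spdk.sock:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"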
00:09:49.746 [2024-11-26 18:13:43.076814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59072 ] 00:09:50.007 [2024-11-26 18:13:43.257521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:50.266 [2024-11-26 18:13:43.400357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.266 [2024-11-26 18:13:43.400385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.200 18:13:44 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.200 18:13:44 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:51.200 18:13:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59096 00:09:51.200 18:13:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:51.200 18:13:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:51.459 [ 00:09:51.459 "bdev_malloc_delete", 00:09:51.459 "bdev_malloc_create", 00:09:51.459 "bdev_null_resize", 00:09:51.459 "bdev_null_delete", 00:09:51.459 "bdev_null_create", 00:09:51.459 "bdev_nvme_cuse_unregister", 00:09:51.459 "bdev_nvme_cuse_register", 00:09:51.459 "bdev_opal_new_user", 00:09:51.459 "bdev_opal_set_lock_state", 00:09:51.459 "bdev_opal_delete", 00:09:51.459 "bdev_opal_get_info", 00:09:51.460 "bdev_opal_create", 00:09:51.460 "bdev_nvme_opal_revert", 00:09:51.460 "bdev_nvme_opal_init", 00:09:51.460 "bdev_nvme_send_cmd", 00:09:51.460 "bdev_nvme_set_keys", 00:09:51.460 "bdev_nvme_get_path_iostat", 00:09:51.460 "bdev_nvme_get_mdns_discovery_info", 00:09:51.460 "bdev_nvme_stop_mdns_discovery", 00:09:51.460 "bdev_nvme_start_mdns_discovery", 00:09:51.460 "bdev_nvme_set_multipath_policy", 00:09:51.460 "bdev_nvme_set_preferred_path", 00:09:51.460 "bdev_nvme_get_io_paths", 00:09:51.460 "bdev_nvme_remove_error_injection", 00:09:51.460 "bdev_nvme_add_error_injection", 00:09:51.460 "bdev_nvme_get_discovery_info", 00:09:51.460 "bdev_nvme_stop_discovery", 00:09:51.460 "bdev_nvme_start_discovery", 00:09:51.460 "bdev_nvme_get_controller_health_info", 00:09:51.460 "bdev_nvme_disable_controller", 00:09:51.460 "bdev_nvme_enable_controller", 00:09:51.460 "bdev_nvme_reset_controller", 00:09:51.460 "bdev_nvme_get_transport_statistics", 00:09:51.460 "bdev_nvme_apply_firmware", 00:09:51.460 "bdev_nvme_detach_controller", 00:09:51.460 "bdev_nvme_get_controllers", 00:09:51.460 "bdev_nvme_attach_controller", 00:09:51.460 "bdev_nvme_set_hotplug", 00:09:51.460 "bdev_nvme_set_options", 00:09:51.460 "bdev_passthru_delete", 00:09:51.460 "bdev_passthru_create", 00:09:51.460 "bdev_lvol_set_parent_bdev", 00:09:51.460 "bdev_lvol_set_parent", 00:09:51.460 "bdev_lvol_check_shallow_copy", 00:09:51.460 "bdev_lvol_start_shallow_copy", 00:09:51.460 "bdev_lvol_grow_lvstore", 00:09:51.460 "bdev_lvol_get_lvols", 00:09:51.460 "bdev_lvol_get_lvstores", 00:09:51.460 "bdev_lvol_delete", 00:09:51.460 "bdev_lvol_set_read_only", 00:09:51.460 "bdev_lvol_resize", 00:09:51.460 "bdev_lvol_decouple_parent", 00:09:51.460 "bdev_lvol_inflate", 00:09:51.460 "bdev_lvol_rename", 00:09:51.460 "bdev_lvol_clone_bdev", 00:09:51.460 "bdev_lvol_clone", 00:09:51.460 "bdev_lvol_snapshot", 00:09:51.460 "bdev_lvol_create", 00:09:51.460 "bdev_lvol_delete_lvstore", 00:09:51.460 "bdev_lvol_rename_lvstore", 00:09:51.460 
"bdev_lvol_create_lvstore", 00:09:51.460 "bdev_raid_set_options", 00:09:51.460 "bdev_raid_remove_base_bdev", 00:09:51.460 "bdev_raid_add_base_bdev", 00:09:51.460 "bdev_raid_delete", 00:09:51.460 "bdev_raid_create", 00:09:51.460 "bdev_raid_get_bdevs", 00:09:51.460 "bdev_error_inject_error", 00:09:51.460 "bdev_error_delete", 00:09:51.460 "bdev_error_create", 00:09:51.460 "bdev_split_delete", 00:09:51.460 "bdev_split_create", 00:09:51.460 "bdev_delay_delete", 00:09:51.460 "bdev_delay_create", 00:09:51.460 "bdev_delay_update_latency", 00:09:51.460 "bdev_zone_block_delete", 00:09:51.460 "bdev_zone_block_create", 00:09:51.460 "blobfs_create", 00:09:51.460 "blobfs_detect", 00:09:51.460 "blobfs_set_cache_size", 00:09:51.460 "bdev_xnvme_delete", 00:09:51.460 "bdev_xnvme_create", 00:09:51.460 "bdev_aio_delete", 00:09:51.460 "bdev_aio_rescan", 00:09:51.460 "bdev_aio_create", 00:09:51.460 "bdev_ftl_set_property", 00:09:51.460 "bdev_ftl_get_properties", 00:09:51.460 "bdev_ftl_get_stats", 00:09:51.460 "bdev_ftl_unmap", 00:09:51.460 "bdev_ftl_unload", 00:09:51.460 "bdev_ftl_delete", 00:09:51.460 "bdev_ftl_load", 00:09:51.460 "bdev_ftl_create", 00:09:51.460 "bdev_virtio_attach_controller", 00:09:51.460 "bdev_virtio_scsi_get_devices", 00:09:51.460 "bdev_virtio_detach_controller", 00:09:51.460 "bdev_virtio_blk_set_hotplug", 00:09:51.460 "bdev_iscsi_delete", 00:09:51.460 "bdev_iscsi_create", 00:09:51.460 "bdev_iscsi_set_options", 00:09:51.460 "accel_error_inject_error", 00:09:51.460 "ioat_scan_accel_module", 00:09:51.460 "dsa_scan_accel_module", 00:09:51.460 "iaa_scan_accel_module", 00:09:51.460 "keyring_file_remove_key", 00:09:51.460 "keyring_file_add_key", 00:09:51.460 "keyring_linux_set_options", 00:09:51.460 "fsdev_aio_delete", 00:09:51.460 "fsdev_aio_create", 00:09:51.460 "iscsi_get_histogram", 00:09:51.460 "iscsi_enable_histogram", 00:09:51.460 "iscsi_set_options", 00:09:51.460 "iscsi_get_auth_groups", 00:09:51.460 "iscsi_auth_group_remove_secret", 00:09:51.460 "iscsi_auth_group_add_secret", 00:09:51.460 "iscsi_delete_auth_group", 00:09:51.460 "iscsi_create_auth_group", 00:09:51.460 "iscsi_set_discovery_auth", 00:09:51.460 "iscsi_get_options", 00:09:51.460 "iscsi_target_node_request_logout", 00:09:51.460 "iscsi_target_node_set_redirect", 00:09:51.460 "iscsi_target_node_set_auth", 00:09:51.460 "iscsi_target_node_add_lun", 00:09:51.460 "iscsi_get_stats", 00:09:51.460 "iscsi_get_connections", 00:09:51.460 "iscsi_portal_group_set_auth", 00:09:51.460 "iscsi_start_portal_group", 00:09:51.460 "iscsi_delete_portal_group", 00:09:51.460 "iscsi_create_portal_group", 00:09:51.460 "iscsi_get_portal_groups", 00:09:51.460 "iscsi_delete_target_node", 00:09:51.460 "iscsi_target_node_remove_pg_ig_maps", 00:09:51.460 "iscsi_target_node_add_pg_ig_maps", 00:09:51.460 "iscsi_create_target_node", 00:09:51.460 "iscsi_get_target_nodes", 00:09:51.460 "iscsi_delete_initiator_group", 00:09:51.460 "iscsi_initiator_group_remove_initiators", 00:09:51.460 "iscsi_initiator_group_add_initiators", 00:09:51.460 "iscsi_create_initiator_group", 00:09:51.460 "iscsi_get_initiator_groups", 00:09:51.460 "nvmf_set_crdt", 00:09:51.460 "nvmf_set_config", 00:09:51.460 "nvmf_set_max_subsystems", 00:09:51.460 "nvmf_stop_mdns_prr", 00:09:51.460 "nvmf_publish_mdns_prr", 00:09:51.460 "nvmf_subsystem_get_listeners", 00:09:51.460 "nvmf_subsystem_get_qpairs", 00:09:51.460 "nvmf_subsystem_get_controllers", 00:09:51.460 "nvmf_get_stats", 00:09:51.460 "nvmf_get_transports", 00:09:51.460 "nvmf_create_transport", 00:09:51.460 "nvmf_get_targets", 00:09:51.460 
"nvmf_delete_target", 00:09:51.460 "nvmf_create_target", 00:09:51.460 "nvmf_subsystem_allow_any_host", 00:09:51.460 "nvmf_subsystem_set_keys", 00:09:51.460 "nvmf_subsystem_remove_host", 00:09:51.460 "nvmf_subsystem_add_host", 00:09:51.460 "nvmf_ns_remove_host", 00:09:51.460 "nvmf_ns_add_host", 00:09:51.460 "nvmf_subsystem_remove_ns", 00:09:51.460 "nvmf_subsystem_set_ns_ana_group", 00:09:51.460 "nvmf_subsystem_add_ns", 00:09:51.460 "nvmf_subsystem_listener_set_ana_state", 00:09:51.460 "nvmf_discovery_get_referrals", 00:09:51.460 "nvmf_discovery_remove_referral", 00:09:51.460 "nvmf_discovery_add_referral", 00:09:51.460 "nvmf_subsystem_remove_listener", 00:09:51.460 "nvmf_subsystem_add_listener", 00:09:51.460 "nvmf_delete_subsystem", 00:09:51.460 "nvmf_create_subsystem", 00:09:51.460 "nvmf_get_subsystems", 00:09:51.460 "env_dpdk_get_mem_stats", 00:09:51.460 "nbd_get_disks", 00:09:51.460 "nbd_stop_disk", 00:09:51.460 "nbd_start_disk", 00:09:51.460 "ublk_recover_disk", 00:09:51.460 "ublk_get_disks", 00:09:51.460 "ublk_stop_disk", 00:09:51.460 "ublk_start_disk", 00:09:51.460 "ublk_destroy_target", 00:09:51.460 "ublk_create_target", 00:09:51.460 "virtio_blk_create_transport", 00:09:51.460 "virtio_blk_get_transports", 00:09:51.460 "vhost_controller_set_coalescing", 00:09:51.460 "vhost_get_controllers", 00:09:51.460 "vhost_delete_controller", 00:09:51.460 "vhost_create_blk_controller", 00:09:51.460 "vhost_scsi_controller_remove_target", 00:09:51.460 "vhost_scsi_controller_add_target", 00:09:51.460 "vhost_start_scsi_controller", 00:09:51.460 "vhost_create_scsi_controller", 00:09:51.460 "thread_set_cpumask", 00:09:51.460 "scheduler_set_options", 00:09:51.460 "framework_get_governor", 00:09:51.460 "framework_get_scheduler", 00:09:51.460 "framework_set_scheduler", 00:09:51.460 "framework_get_reactors", 00:09:51.460 "thread_get_io_channels", 00:09:51.460 "thread_get_pollers", 00:09:51.460 "thread_get_stats", 00:09:51.460 "framework_monitor_context_switch", 00:09:51.460 "spdk_kill_instance", 00:09:51.460 "log_enable_timestamps", 00:09:51.460 "log_get_flags", 00:09:51.460 "log_clear_flag", 00:09:51.460 "log_set_flag", 00:09:51.460 "log_get_level", 00:09:51.460 "log_set_level", 00:09:51.460 "log_get_print_level", 00:09:51.460 "log_set_print_level", 00:09:51.460 "framework_enable_cpumask_locks", 00:09:51.460 "framework_disable_cpumask_locks", 00:09:51.460 "framework_wait_init", 00:09:51.460 "framework_start_init", 00:09:51.460 "scsi_get_devices", 00:09:51.460 "bdev_get_histogram", 00:09:51.460 "bdev_enable_histogram", 00:09:51.460 "bdev_set_qos_limit", 00:09:51.460 "bdev_set_qd_sampling_period", 00:09:51.460 "bdev_get_bdevs", 00:09:51.460 "bdev_reset_iostat", 00:09:51.460 "bdev_get_iostat", 00:09:51.460 "bdev_examine", 00:09:51.460 "bdev_wait_for_examine", 00:09:51.460 "bdev_set_options", 00:09:51.460 "accel_get_stats", 00:09:51.460 "accel_set_options", 00:09:51.460 "accel_set_driver", 00:09:51.460 "accel_crypto_key_destroy", 00:09:51.460 "accel_crypto_keys_get", 00:09:51.460 "accel_crypto_key_create", 00:09:51.460 "accel_assign_opc", 00:09:51.460 "accel_get_module_info", 00:09:51.460 "accel_get_opc_assignments", 00:09:51.460 "vmd_rescan", 00:09:51.460 "vmd_remove_device", 00:09:51.460 "vmd_enable", 00:09:51.460 "sock_get_default_impl", 00:09:51.460 "sock_set_default_impl", 00:09:51.460 "sock_impl_set_options", 00:09:51.460 "sock_impl_get_options", 00:09:51.460 "iobuf_get_stats", 00:09:51.460 "iobuf_set_options", 00:09:51.460 "keyring_get_keys", 00:09:51.460 "framework_get_pci_devices", 00:09:51.460 
"framework_get_config", 00:09:51.460 "framework_get_subsystems", 00:09:51.461 "fsdev_set_opts", 00:09:51.461 "fsdev_get_opts", 00:09:51.461 "trace_get_info", 00:09:51.461 "trace_get_tpoint_group_mask", 00:09:51.461 "trace_disable_tpoint_group", 00:09:51.461 "trace_enable_tpoint_group", 00:09:51.461 "trace_clear_tpoint_mask", 00:09:51.461 "trace_set_tpoint_mask", 00:09:51.461 "notify_get_notifications", 00:09:51.461 "notify_get_types", 00:09:51.461 "spdk_get_version", 00:09:51.461 "rpc_get_methods" 00:09:51.461 ] 00:09:51.461 18:13:44 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:51.461 18:13:44 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:51.461 18:13:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:51.461 18:13:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:51.461 18:13:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59072 00:09:51.461 18:13:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59072 ']' 00:09:51.461 18:13:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59072 00:09:51.461 18:13:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:51.461 18:13:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.461 18:13:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59072 00:09:51.461 killing process with pid 59072 00:09:51.461 18:13:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.461 18:13:44 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.461 18:13:44 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59072' 00:09:51.461 18:13:44 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59072 00:09:51.461 18:13:44 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59072 00:09:54.766 ************************************ 00:09:54.766 END TEST spdkcli_tcp 00:09:54.766 ************************************ 00:09:54.766 00:09:54.766 real 0m4.961s 00:09:54.766 user 0m9.029s 00:09:54.766 sys 0m0.669s 00:09:54.766 18:13:47 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.766 18:13:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:54.766 18:13:47 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:54.766 18:13:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:54.766 18:13:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.766 18:13:47 -- common/autotest_common.sh@10 -- # set +x 00:09:54.766 ************************************ 00:09:54.766 START TEST dpdk_mem_utility 00:09:54.766 ************************************ 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:54.766 * Looking for test storage... 
00:09:54.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.766 18:13:47 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.766 --rc genhtml_branch_coverage=1 00:09:54.766 --rc genhtml_function_coverage=1 00:09:54.766 --rc genhtml_legend=1 00:09:54.766 --rc geninfo_all_blocks=1 00:09:54.766 --rc geninfo_unexecuted_blocks=1 00:09:54.766 00:09:54.766 ' 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.766 --rc 
genhtml_branch_coverage=1 00:09:54.766 --rc genhtml_function_coverage=1 00:09:54.766 --rc genhtml_legend=1 00:09:54.766 --rc geninfo_all_blocks=1 00:09:54.766 --rc geninfo_unexecuted_blocks=1 00:09:54.766 00:09:54.766 ' 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.766 --rc genhtml_branch_coverage=1 00:09:54.766 --rc genhtml_function_coverage=1 00:09:54.766 --rc genhtml_legend=1 00:09:54.766 --rc geninfo_all_blocks=1 00:09:54.766 --rc geninfo_unexecuted_blocks=1 00:09:54.766 00:09:54.766 ' 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:54.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.766 --rc genhtml_branch_coverage=1 00:09:54.766 --rc genhtml_function_coverage=1 00:09:54.766 --rc genhtml_legend=1 00:09:54.766 --rc geninfo_all_blocks=1 00:09:54.766 --rc geninfo_unexecuted_blocks=1 00:09:54.766 00:09:54.766 ' 00:09:54.766 18:13:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:54.766 18:13:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59201 00:09:54.766 18:13:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:54.766 18:13:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59201 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59201 ']' 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.766 18:13:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:55.026 [2024-11-26 18:13:48.132775] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
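The dpdk_mem_utility steps below ask the running target to dump its DPDK memory state and then render the dump two ways. A sketch of the sequence, with paths as they appear in this run:

    scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                # totals: heaps, mempools, memzones
    scripts/dpdk_mem_info.py -m 0           # per-element breakdown of heap 0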
00:09:55.026 [2024-11-26 18:13:48.133062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59201 ] 00:09:55.026 [2024-11-26 18:13:48.318530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.285 [2024-11-26 18:13:48.454091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.223 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.223 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:56.223 18:13:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:56.223 18:13:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:56.223 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.223 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:56.223 { 00:09:56.223 "filename": "/tmp/spdk_mem_dump.txt" 00:09:56.223 } 00:09:56.223 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.223 18:13:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:56.223 DPDK memory size 824.000000 MiB in 1 heap(s) 00:09:56.223 1 heaps totaling size 824.000000 MiB 00:09:56.223 size: 824.000000 MiB heap id: 0 00:09:56.223 end heaps---------- 00:09:56.223 9 mempools totaling size 603.782043 MiB 00:09:56.223 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:56.223 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:56.223 size: 100.555481 MiB name: bdev_io_59201 00:09:56.223 size: 50.003479 MiB name: msgpool_59201 00:09:56.223 size: 36.509338 MiB name: fsdev_io_59201 00:09:56.223 size: 21.763794 MiB name: PDU_Pool 00:09:56.223 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:56.223 size: 4.133484 MiB name: evtpool_59201 00:09:56.223 size: 0.026123 MiB name: Session_Pool 00:09:56.223 end mempools------- 00:09:56.223 6 memzones totaling size 4.142822 MiB 00:09:56.223 size: 1.000366 MiB name: RG_ring_0_59201 00:09:56.223 size: 1.000366 MiB name: RG_ring_1_59201 00:09:56.223 size: 1.000366 MiB name: RG_ring_4_59201 00:09:56.223 size: 1.000366 MiB name: RG_ring_5_59201 00:09:56.223 size: 0.125366 MiB name: RG_ring_2_59201 00:09:56.223 size: 0.015991 MiB name: RG_ring_3_59201 00:09:56.223 end memzones------- 00:09:56.223 18:13:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:56.484 heap id: 0 total size: 824.000000 MiB number of busy elements: 318 number of free elements: 18 00:09:56.484 list of free elements. 
size: 16.780640 MiB
00:09:56.484 element at address: 0x200006400000 with size: 1.995972 MiB
00:09:56.484 element at address: 0x20000a600000 with size: 1.995972 MiB
00:09:56.484 element at address: 0x200003e00000 with size: 1.991028 MiB
00:09:56.484 element at address: 0x200019500040 with size: 0.999939 MiB
00:09:56.484 element at address: 0x200019900040 with size: 0.999939 MiB
00:09:56.484 element at address: 0x200019a00000 with size: 0.999084 MiB
00:09:56.484 element at address: 0x200032600000 with size: 0.994324 MiB
00:09:56.484 element at address: 0x200000400000 with size: 0.992004 MiB
00:09:56.484 element at address: 0x200019200000 with size: 0.959656 MiB
00:09:56.484 element at address: 0x200019d00040 with size: 0.936401 MiB
00:09:56.484 element at address: 0x200000200000 with size: 0.716980 MiB
00:09:56.484 element at address: 0x20001b400000 with size: 0.561951 MiB
00:09:56.484 element at address: 0x200000c00000 with size: 0.489197 MiB
00:09:56.484 element at address: 0x200019600000 with size: 0.487976 MiB
00:09:56.484 element at address: 0x200019e00000 with size: 0.485413 MiB
00:09:56.484 element at address: 0x200012c00000 with size: 0.433472 MiB
00:09:56.484 element at address: 0x200028800000 with size: 0.390442 MiB
00:09:56.484 element at address: 0x200000800000 with size: 0.350891 MiB
00:09:56.484 list of standard malloc elements. size: 199.288452 MiB
00:09:56.484 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:09:56.484 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:09:56.484 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:09:56.484 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:09:56.484 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:09:56.484 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:09:56.484 element at address: 0x200019deff40 with size: 0.062683 MiB
00:09:56.484 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:09:56.484 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:09:56.484 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:09:56.484 element at address: 0x200012bff040 with size: 0.000305 MiB
00:09:56.484 [several hundred further elements, each with size: 0.000244 MiB, at addresses 0x2000002d7b00 through 0x20002886fe80; individual entries elided]
00:09:56.486 list of memzone associated elements.
size: 607.930908 MiB 00:09:56.486 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:09:56.486 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:56.486 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:09:56.486 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:56.486 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:09:56.486 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59201_0 00:09:56.486 element at address: 0x200000dff340 with size: 48.003113 MiB 00:09:56.486 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59201_0 00:09:56.486 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:09:56.486 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59201_0 00:09:56.486 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:09:56.486 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:56.486 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:09:56.486 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:56.486 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:09:56.486 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59201_0 00:09:56.486 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:09:56.486 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59201 00:09:56.486 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:56.486 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59201 00:09:56.486 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:09:56.486 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:56.486 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:09:56.486 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:56.486 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:09:56.486 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:56.486 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:09:56.486 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:56.486 element at address: 0x200000cff100 with size: 1.000549 MiB 00:09:56.486 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59201 00:09:56.486 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:09:56.486 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59201 00:09:56.486 element at address: 0x200019affd40 with size: 1.000549 MiB 00:09:56.486 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59201 00:09:56.486 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:09:56.486 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59201 00:09:56.486 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:09:56.486 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59201 00:09:56.486 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:09:56.487 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59201 00:09:56.487 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:09:56.487 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:56.487 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:09:56.487 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:56.487 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:09:56.487 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:09:56.487 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:09:56.487 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59201 00:09:56.487 element at address: 0x20000085df80 with size: 0.125549 MiB 00:09:56.487 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59201 00:09:56.487 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:09:56.487 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:56.487 element at address: 0x200028864140 with size: 0.023804 MiB 00:09:56.487 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:56.487 element at address: 0x200000859d40 with size: 0.016174 MiB 00:09:56.487 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59201 00:09:56.487 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:09:56.487 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:56.487 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:09:56.487 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59201 00:09:56.487 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:09:56.487 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59201 00:09:56.487 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:09:56.487 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59201 00:09:56.487 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:09:56.487 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:56.487 18:13:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:56.487 18:13:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59201 00:09:56.487 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59201 ']' 00:09:56.487 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59201 00:09:56.487 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:09:56.487 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.487 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59201 00:09:56.487 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.487 killing process with pid 59201 00:09:56.487 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.487 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59201' 00:09:56.487 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59201 00:09:56.487 18:13:49 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59201 00:09:59.771 00:09:59.771 real 0m4.622s 00:09:59.771 user 0m4.579s 00:09:59.771 sys 0m0.614s 00:09:59.771 18:13:52 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.771 ************************************ 00:09:59.771 END TEST dpdk_mem_utility 00:09:59.771 ************************************ 00:09:59.771 18:13:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:59.771 18:13:52 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:59.771 18:13:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.771 18:13:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.771 18:13:52 -- common/autotest_common.sh@10 -- # set +x 
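The dpdk_mem_utility teardown traced above is the stock killprocess helper from common/autotest_common.sh: confirm the pid is still alive with kill -0, resolve its name with ps, then kill it and reap it with wait. A minimal sketch reconstructed from those xtrace lines (illustrative only; the real helper also has a branch for processes started via sudo, visible in the '[' reactor_0 = sudo ']' test, which is skipped here):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # no pid supplied
        kill -0 "$pid" || return 0           # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                      # reap it so the exit status is observed
        fi
    }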
00:09:59.771 ************************************ 00:09:59.771 START TEST event 00:09:59.771 ************************************ 00:09:59.771 18:13:52 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:59.771 * Looking for test storage... 00:09:59.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:59.771 18:13:52 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.771 18:13:52 event -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.771 18:13:52 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.771 18:13:52 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.771 18:13:52 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.771 18:13:52 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.771 18:13:52 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.771 18:13:52 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.771 18:13:52 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.771 18:13:52 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.771 18:13:52 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.771 18:13:52 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.771 18:13:52 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.771 18:13:52 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.771 18:13:52 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.771 18:13:52 event -- scripts/common.sh@344 -- # case "$op" in 00:09:59.771 18:13:52 event -- scripts/common.sh@345 -- # : 1 00:09:59.771 18:13:52 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.771 18:13:52 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.771 18:13:52 event -- scripts/common.sh@365 -- # decimal 1 00:09:59.771 18:13:52 event -- scripts/common.sh@353 -- # local d=1 00:09:59.771 18:13:52 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.771 18:13:52 event -- scripts/common.sh@355 -- # echo 1 00:09:59.771 18:13:52 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.771 18:13:52 event -- scripts/common.sh@366 -- # decimal 2 00:09:59.771 18:13:52 event -- scripts/common.sh@353 -- # local d=2 00:09:59.771 18:13:52 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.771 18:13:52 event -- scripts/common.sh@355 -- # echo 2 00:09:59.771 18:13:52 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.771 18:13:52 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.771 18:13:52 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.772 18:13:52 event -- scripts/common.sh@368 -- # return 0 00:09:59.772 18:13:52 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.772 18:13:52 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.772 --rc genhtml_branch_coverage=1 00:09:59.772 --rc genhtml_function_coverage=1 00:09:59.772 --rc genhtml_legend=1 00:09:59.772 --rc geninfo_all_blocks=1 00:09:59.772 --rc geninfo_unexecuted_blocks=1 00:09:59.772 00:09:59.772 ' 00:09:59.772 18:13:52 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.772 --rc genhtml_branch_coverage=1 00:09:59.772 --rc genhtml_function_coverage=1 00:09:59.772 --rc genhtml_legend=1 00:09:59.772 --rc 
geninfo_all_blocks=1 00:09:59.772 --rc geninfo_unexecuted_blocks=1 00:09:59.772 00:09:59.772 ' 00:09:59.772 18:13:52 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:59.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.772 --rc genhtml_branch_coverage=1 00:09:59.772 --rc genhtml_function_coverage=1 00:09:59.772 --rc genhtml_legend=1 00:09:59.772 --rc geninfo_all_blocks=1 00:09:59.772 --rc geninfo_unexecuted_blocks=1 00:09:59.772 00:09:59.772 ' 00:09:59.772 18:13:52 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.772 --rc genhtml_branch_coverage=1 00:09:59.772 --rc genhtml_function_coverage=1 00:09:59.772 --rc genhtml_legend=1 00:09:59.772 --rc geninfo_all_blocks=1 00:09:59.772 --rc geninfo_unexecuted_blocks=1 00:09:59.772 00:09:59.772 ' 00:09:59.772 18:13:52 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:59.772 18:13:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:59.772 18:13:52 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:59.772 18:13:52 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:59.772 18:13:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.772 18:13:52 event -- common/autotest_common.sh@10 -- # set +x 00:09:59.772 ************************************ 00:09:59.772 START TEST event_perf 00:09:59.772 ************************************ 00:09:59.772 18:13:52 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:59.772 Running I/O for 1 seconds...[2024-11-26 18:13:52.690285] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:09:59.772 [2024-11-26 18:13:52.690809] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59320 ] 00:09:59.772 [2024-11-26 18:13:52.866838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.772 [2024-11-26 18:13:52.996136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.772 [2024-11-26 18:13:52.996383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.772 [2024-11-26 18:13:52.996352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.772 [2024-11-26 18:13:52.996157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.201 Running I/O for 1 seconds... 00:10:01.201 lcore 0: 180017 00:10:01.201 lcore 1: 180018 00:10:01.201 lcore 2: 180019 00:10:01.201 lcore 3: 180018 00:10:01.201 done. 
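At the top of this event section (and again before the scheduler suite below), the harness probes the installed lcov with lt 1.15 2, which scripts/common.sh answers by splitting both version strings on '.', '-' and ':' and comparing them numerically field by field; that is exactly the loop the cmp_versions xtrace walks through. A condensed sketch of that logic (it assumes purely numeric fields; the real script routes each field through its decimal helper first):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:                        # split fields, matching the trace's IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v d1 d2
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}    # missing fields compare as 0
            if (( d1 > d2 )); then [[ $op == '>' ]]; return; fi
            if (( d1 < d2 )); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '=' ]]                     # every field equal
    }

With these definitions, lt 1.15 2 succeeds on the first field (1 < 2), which is why the trace above ends in return 0.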
00:10:01.201 00:10:01.201 real 0m1.618s 00:10:01.201 user 0m4.380s 00:10:01.201 sys 0m0.111s 00:10:01.201 18:13:54 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.201 18:13:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:01.201 ************************************ 00:10:01.201 END TEST event_perf 00:10:01.201 ************************************ 00:10:01.201 18:13:54 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:01.201 18:13:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:01.201 18:13:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.201 18:13:54 event -- common/autotest_common.sh@10 -- # set +x 00:10:01.201 ************************************ 00:10:01.201 START TEST event_reactor 00:10:01.201 ************************************ 00:10:01.201 18:13:54 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:01.201 [2024-11-26 18:13:54.378052] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:10:01.201 [2024-11-26 18:13:54.378296] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59354 ] 00:10:01.461 [2024-11-26 18:13:54.559250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.461 [2024-11-26 18:13:54.689108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.838 test_start 00:10:02.838 oneshot 00:10:02.838 tick 100 00:10:02.838 tick 100 00:10:02.838 tick 250 00:10:02.838 tick 100 00:10:02.838 tick 100 00:10:02.838 tick 100 00:10:02.838 tick 250 00:10:02.838 tick 500 00:10:02.838 tick 100 00:10:02.838 tick 100 00:10:02.838 tick 250 00:10:02.838 tick 100 00:10:02.838 tick 100 00:10:02.838 test_end 00:10:02.838 00:10:02.838 real 0m1.612s 00:10:02.838 user 0m1.397s 00:10:02.838 sys 0m0.104s 00:10:02.838 18:13:55 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.838 ************************************ 00:10:02.838 END TEST event_reactor 00:10:02.838 ************************************ 00:10:02.838 18:13:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:02.838 18:13:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:02.838 18:13:55 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:02.838 18:13:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.838 18:13:55 event -- common/autotest_common.sh@10 -- # set +x 00:10:02.838 ************************************ 00:10:02.838 START TEST event_reactor_perf 00:10:02.838 ************************************ 00:10:02.838 18:13:55 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:02.838 [2024-11-26 18:13:56.044445] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:10:02.838 [2024-11-26 18:13:56.044589] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59396 ] 00:10:03.098 [2024-11-26 18:13:56.222100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.098 [2024-11-26 18:13:56.351125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.481 test_start 00:10:04.481 test_end 00:10:04.481 Performance: 345662 events per second 00:10:04.481 00:10:04.481 real 0m1.597s 00:10:04.481 user 0m1.398s 00:10:04.481 sys 0m0.090s 00:10:04.481 ************************************ 00:10:04.481 END TEST event_reactor_perf 00:10:04.481 ************************************ 00:10:04.481 18:13:57 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.481 18:13:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:04.481 18:13:57 event -- event/event.sh@49 -- # uname -s 00:10:04.481 18:13:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:04.481 18:13:57 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:04.481 18:13:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.481 18:13:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.481 18:13:57 event -- common/autotest_common.sh@10 -- # set +x 00:10:04.481 ************************************ 00:10:04.481 START TEST event_scheduler 00:10:04.481 ************************************ 00:10:04.481 18:13:57 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:04.481 * Looking for test storage... 
00:10:04.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:04.481 18:13:57 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:04.481 18:13:57 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:10:04.481 18:13:57 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:04.740 18:13:57 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.740 18:13:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:04.740 18:13:57 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.740 18:13:57 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.740 --rc genhtml_branch_coverage=1 00:10:04.740 --rc genhtml_function_coverage=1 00:10:04.740 --rc genhtml_legend=1 00:10:04.740 --rc geninfo_all_blocks=1 00:10:04.740 --rc geninfo_unexecuted_blocks=1 00:10:04.740 00:10:04.740 ' 00:10:04.740 18:13:57 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.740 --rc genhtml_branch_coverage=1 00:10:04.740 --rc genhtml_function_coverage=1 00:10:04.740 --rc genhtml_legend=1 00:10:04.740 --rc geninfo_all_blocks=1 00:10:04.740 --rc geninfo_unexecuted_blocks=1 00:10:04.740 00:10:04.740 ' 00:10:04.740 18:13:57 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.740 --rc genhtml_branch_coverage=1 00:10:04.740 --rc genhtml_function_coverage=1 00:10:04.740 --rc genhtml_legend=1 00:10:04.740 --rc geninfo_all_blocks=1 00:10:04.740 --rc geninfo_unexecuted_blocks=1 00:10:04.740 00:10:04.740 ' 00:10:04.740 18:13:57 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.740 --rc genhtml_branch_coverage=1 00:10:04.740 --rc genhtml_function_coverage=1 00:10:04.740 --rc genhtml_legend=1 00:10:04.740 --rc geninfo_all_blocks=1 00:10:04.740 --rc geninfo_unexecuted_blocks=1 00:10:04.740 00:10:04.740 ' 00:10:04.740 18:13:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:04.740 18:13:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59472 00:10:04.740 18:13:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:04.740 18:13:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:04.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
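The scheduler app has just been forked with --wait-for-rpc, and the harness now blocks in waitforlisten (traced below) until the app's RPC socket is usable. A simplified polling sketch consistent with the locals visible in the trace (rpc_addr, max_retries=100); treating bare socket existence as readiness is an assumption made here for brevity, since the real helper in common/autotest_common.sh verifies that the socket actually answers RPCs:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        [ -z "$pid" ] && return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" || return 1       # app died while we were waiting
            [ -S "$rpc_addr" ] && return 0   # socket exists; assumed ready here
            sleep 0.1
        done
        return 1                             # never came up
    }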
00:10:04.740 18:13:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59472 00:10:04.740 18:13:57 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59472 ']' 00:10:04.740 18:13:57 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.740 18:13:57 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.740 18:13:57 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.740 18:13:57 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.740 18:13:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:04.740 [2024-11-26 18:13:57.991753] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:10:04.740 [2024-11-26 18:13:57.991889] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59472 ] 00:10:04.999 [2024-11-26 18:13:58.172456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.000 [2024-11-26 18:13:58.305337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.000 [2024-11-26 18:13:58.305468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.000 [2024-11-26 18:13:58.305645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.000 [2024-11-26 18:13:58.305696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.568 18:13:58 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.568 18:13:58 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:10:05.568 18:13:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:05.568 18:13:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.568 18:13:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:05.568 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:05.568 POWER: Cannot set governor of lcore 0 to userspace 00:10:05.568 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:05.568 POWER: Cannot set governor of lcore 0 to performance 00:10:05.568 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:05.568 POWER: Cannot set governor of lcore 0 to userspace 00:10:05.568 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:05.568 POWER: Cannot set governor of lcore 0 to userspace 00:10:05.568 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:05.568 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:05.568 POWER: Unable to set Power Management Environment for lcore 0 00:10:05.568 [2024-11-26 18:13:58.898687] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:10:05.568 [2024-11-26 18:13:58.898716] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:10:05.568 [2024-11-26 18:13:58.898729] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:05.568 [2024-11-26 
18:13:58.898750] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:05.568 [2024-11-26 18:13:58.898761] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:05.568 [2024-11-26 18:13:58.898772] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:05.826 18:13:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.826 18:13:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:05.826 18:13:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.826 18:13:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:06.086 [2024-11-26 18:13:59.255156] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:10:06.086 18:13:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.086 18:13:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:06.086 18:13:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.086 18:13:59 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.086 18:13:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:06.086 ************************************ 00:10:06.086 START TEST scheduler_create_thread 00:10:06.086 ************************************ 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:06.086 2 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:06.086 3 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:06.086 4 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:06.086 18:13:59 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:06.086 5 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:06.086 6 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:06.086 7 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:06.086 8 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:06.086 9 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:06.086 10 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:06.086 18:13:59 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.086 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:06.653 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.653 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:06.653 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.653 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:08.061 18:14:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.061 18:14:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:08.061 18:14:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:08.061 18:14:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.061 18:14:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:09.439 18:14:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.439 00:10:09.439 real 0m3.100s 00:10:09.439 user 0m0.030s 00:10:09.439 sys 0m0.006s 00:10:09.439 18:14:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.439 18:14:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:09.439 ************************************ 00:10:09.439 END TEST scheduler_create_thread 00:10:09.439 ************************************ 00:10:09.439 18:14:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:09.439 18:14:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59472 00:10:09.439 18:14:02 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59472 ']' 00:10:09.439 18:14:02 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59472 00:10:09.439 18:14:02 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:09.439 18:14:02 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.439 18:14:02 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59472 00:10:09.439 killing process with pid 59472 00:10:09.439 18:14:02 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:09.439 18:14:02 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:09.439 18:14:02 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59472' 00:10:09.439 18:14:02 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59472 00:10:09.439 
18:14:02 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59472 00:10:09.439 [2024-11-26 18:14:02.748725] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:10:10.816 00:10:10.816 real 0m6.390s 00:10:10.817 user 0m13.177s 00:10:10.817 sys 0m0.512s 00:10:10.817 18:14:04 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.817 ************************************ 00:10:10.817 END TEST event_scheduler 00:10:10.817 ************************************ 00:10:10.817 18:14:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:10.817 18:14:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:10.817 18:14:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:10.817 18:14:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:10.817 18:14:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.817 18:14:04 event -- common/autotest_common.sh@10 -- # set +x 00:10:10.817 ************************************ 00:10:10.817 START TEST app_repeat 00:10:10.817 ************************************ 00:10:10.817 18:14:04 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:10:10.817 18:14:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:10.817 18:14:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:10.817 18:14:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:10.817 18:14:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:10.817 18:14:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:10.817 18:14:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:10.817 18:14:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:10.817 18:14:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59589 00:10:10.817 18:14:04 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:10.817 18:14:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:10.817 18:14:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59589' 00:10:10.817 Process app_repeat pid: 59589 00:10:10.817 18:14:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:11.076 spdk_app_start Round 0 00:10:11.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:11.076 18:14:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:11.076 18:14:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59589 /var/tmp/spdk-nbd.sock 00:10:11.076 18:14:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59589 ']' 00:10:11.076 18:14:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:11.076 18:14:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.076 18:14:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:11.076 18:14:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.076 18:14:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:11.076 [2024-11-26 18:14:04.205887] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:10:11.076 [2024-11-26 18:14:04.206013] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59589 ] 00:10:11.076 [2024-11-26 18:14:04.383293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:11.340 [2024-11-26 18:14:04.506984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.340 [2024-11-26 18:14:04.507017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.910 18:14:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.910 18:14:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:11.910 18:14:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:12.168 Malloc0 00:10:12.168 18:14:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:12.426 Malloc1 00:10:12.685 18:14:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:12.685 18:14:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:12.685 18:14:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:12.685 18:14:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:12.685 18:14:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:12.685 18:14:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:12.685 18:14:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:12.685 18:14:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:12.685 18:14:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:12.685 18:14:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:12.685 18:14:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:12.685 18:14:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:12.685 18:14:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:12.686 18:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:12.686 18:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:12.686 18:14:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:12.686 /dev/nbd0 00:10:12.944 18:14:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:12.944 18:14:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:12.944 18:14:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:12.944 18:14:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:12.944 18:14:06 event.app_repeat 
-- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:12.944 18:14:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:12.944 18:14:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:12.944 18:14:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:12.944 18:14:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:12.944 18:14:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:12.944 18:14:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:12.944 1+0 records in 00:10:12.944 1+0 records out 00:10:12.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306739 s, 13.4 MB/s 00:10:12.945 18:14:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:12.945 18:14:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:12.945 18:14:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:12.945 18:14:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:12.945 18:14:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:12.945 18:14:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:12.945 18:14:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:12.945 18:14:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:13.204 /dev/nbd1 00:10:13.204 18:14:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:13.204 18:14:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:13.204 18:14:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:13.204 18:14:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:13.204 18:14:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:13.204 18:14:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:13.204 18:14:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:13.204 18:14:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:13.204 18:14:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:13.204 18:14:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:13.204 18:14:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:13.204 1+0 records in 00:10:13.204 1+0 records out 00:10:13.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038586 s, 10.6 MB/s 00:10:13.204 18:14:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:13.204 18:14:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:13.204 18:14:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:13.204 18:14:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:13.204 18:14:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:13.204 18:14:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:13.204 
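The nbd0/nbd1 bring-up above goes through waitfornbd twice: poll /proc/partitions until the kernel publishes the device, then prove a 4 KiB direct read works. A sketch reconstructed from the traced commands; the sleep interval and the /tmp scratch path are assumptions (the trace writes to test/event/nbdtest):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do                       # traced retry bound
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        grep -q -w "$nbd_name" /proc/partitions || return 1
        for ((i = 1; i <= 20; i++)); do
            # one 4 KiB O_DIRECT read: a non-empty copy proves the device answers I/O
            if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2> /dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [[ $size != 0 ]] && return 0
            fi
            sleep 0.1
        done
        return 1
    }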
18:14:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:13.204 18:14:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:13.204 18:14:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:13.204 18:14:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:13.462 { 00:10:13.462 "nbd_device": "/dev/nbd0", 00:10:13.462 "bdev_name": "Malloc0" 00:10:13.462 }, 00:10:13.462 { 00:10:13.462 "nbd_device": "/dev/nbd1", 00:10:13.462 "bdev_name": "Malloc1" 00:10:13.462 } 00:10:13.462 ]' 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:13.462 { 00:10:13.462 "nbd_device": "/dev/nbd0", 00:10:13.462 "bdev_name": "Malloc0" 00:10:13.462 }, 00:10:13.462 { 00:10:13.462 "nbd_device": "/dev/nbd1", 00:10:13.462 "bdev_name": "Malloc1" 00:10:13.462 } 00:10:13.462 ]' 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:13.462 /dev/nbd1' 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:13.462 /dev/nbd1' 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:13.462 18:14:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:13.463 256+0 records in 00:10:13.463 256+0 records out 00:10:13.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128653 s, 81.5 MB/s 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:13.463 256+0 records in 00:10:13.463 256+0 records out 00:10:13.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254658 s, 41.2 MB/s 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:13.463 256+0 records in 00:10:13.463 256+0 records out 00:10:13.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270737 s, 38.7 MB/s 00:10:13.463 18:14:06 event.app_repeat -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:13.463 18:14:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:13.720 18:14:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:13.720 18:14:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:13.720 18:14:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:13.720 18:14:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:13.720 18:14:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:13.720 18:14:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:13.720 18:14:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:13.720 18:14:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:13.720 18:14:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:13.720 18:14:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:14.022 18:14:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:14.022 18:14:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:14.022 18:14:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:14.022 18:14:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:14.022 18:14:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:14.022 18:14:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:14.022 18:14:07 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:14.022 18:14:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:14.022 18:14:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:14.022 18:14:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:14.022 18:14:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:14.282 18:14:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:14.282 18:14:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:14.282 18:14:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:14.282 18:14:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:14.282 18:14:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:14.282 18:14:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:14.282 18:14:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:14.282 18:14:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:14.282 18:14:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:14.282 18:14:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:14.282 18:14:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:14.282 18:14:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:14.282 18:14:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:14.851 18:14:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:16.228 [2024-11-26 18:14:09.275896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:16.228 [2024-11-26 18:14:09.395489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.228 [2024-11-26 18:14:09.395491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.487 [2024-11-26 18:14:09.596086] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:16.487 [2024-11-26 18:14:09.596196] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:17.862 18:14:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:17.863 spdk_app_start Round 1 00:10:17.863 18:14:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:17.863 18:14:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59589 /var/tmp/spdk-nbd.sock 00:10:17.863 18:14:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59589 ']' 00:10:17.863 18:14:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:17.863 18:14:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:17.863 18:14:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
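Before killing the target, the Round-0 teardown above confirms both exports are gone: nbd_get_disks returns '[]', jq extracts no .nbd_device entries, and grep -c counts zero. Every command in this sketch appears verbatim in the trace; only the variable layout is rearranged:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    nbd_get_count() {
        local disks_json disks_name count
        disks_json=$($rpc -s "$sock" nbd_get_disks)             # '[]' once both disks stop
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        count=$(echo "$disks_name" | grep -c /dev/nbd || true)  # grep -c exits 1 on zero hits
        echo "$count"
    }

    [[ $(nbd_get_count) -eq 0 ]] || exit 1    # mirrors the traced "'[' 0 -ne 0 ']'" check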
00:10:17.863 18:14:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.863 18:14:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:18.122 18:14:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.122 18:14:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:18.122 18:14:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:18.381 Malloc0 00:10:18.381 18:14:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:18.639 Malloc1 00:10:18.639 18:14:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:18.639 18:14:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:18.639 18:14:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:18.639 18:14:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:18.639 18:14:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:18.639 18:14:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:18.639 18:14:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:18.639 18:14:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:18.639 18:14:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:18.639 18:14:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:18.640 18:14:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:18.640 18:14:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:18.640 18:14:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:18.640 18:14:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:18.640 18:14:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:18.640 18:14:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:18.899 /dev/nbd0 00:10:18.899 18:14:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:18.899 18:14:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:18.899 18:14:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:18.899 18:14:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:18.899 18:14:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:18.899 18:14:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:18.899 18:14:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:18.899 18:14:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:18.899 18:14:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:18.899 18:14:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:18.899 18:14:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:18.899 1+0 records in 00:10:18.899 1+0 records out 
00:10:18.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353615 s, 11.6 MB/s 00:10:18.899 18:14:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:18.899 18:14:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:18.899 18:14:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:18.899 18:14:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:18.899 18:14:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:18.899 18:14:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:18.899 18:14:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:18.899 18:14:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:19.158 /dev/nbd1 00:10:19.158 18:14:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:19.158 18:14:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:19.158 18:14:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:19.158 18:14:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:19.158 18:14:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:19.158 18:14:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:19.158 18:14:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:19.158 18:14:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:19.158 18:14:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:19.158 18:14:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:19.158 18:14:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:19.158 1+0 records in 00:10:19.158 1+0 records out 00:10:19.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349046 s, 11.7 MB/s 00:10:19.158 18:14:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:19.158 18:14:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:19.158 18:14:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:19.158 18:14:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:19.158 18:14:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:19.158 18:14:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:19.158 18:14:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:19.158 18:14:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:19.158 18:14:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:19.158 18:14:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:19.417 18:14:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:19.417 { 00:10:19.417 "nbd_device": "/dev/nbd0", 00:10:19.417 "bdev_name": "Malloc0" 00:10:19.417 }, 00:10:19.417 { 00:10:19.417 "nbd_device": "/dev/nbd1", 00:10:19.417 "bdev_name": "Malloc1" 00:10:19.417 } 
00:10:19.417 ]' 00:10:19.417 18:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:19.417 { 00:10:19.417 "nbd_device": "/dev/nbd0", 00:10:19.417 "bdev_name": "Malloc0" 00:10:19.417 }, 00:10:19.417 { 00:10:19.418 "nbd_device": "/dev/nbd1", 00:10:19.418 "bdev_name": "Malloc1" 00:10:19.418 } 00:10:19.418 ]' 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:19.418 /dev/nbd1' 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:19.418 /dev/nbd1' 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:19.418 256+0 records in 00:10:19.418 256+0 records out 00:10:19.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122478 s, 85.6 MB/s 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:19.418 256+0 records in 00:10:19.418 256+0 records out 00:10:19.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208797 s, 50.2 MB/s 00:10:19.418 18:14:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:19.679 256+0 records in 00:10:19.679 256+0 records out 00:10:19.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263579 s, 39.8 MB/s 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:19.679 18:14:12 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:19.679 18:14:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:19.939 18:14:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:19.939 18:14:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:19.939 18:14:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:19.939 18:14:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:19.939 18:14:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:19.939 18:14:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:19.939 18:14:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:19.939 18:14:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:19.939 18:14:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:19.939 18:14:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:19.939 18:14:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:19.939 18:14:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:20.199 18:14:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:20.199 18:14:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:20.199 18:14:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:20.199 18:14:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:20.199 18:14:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:20.199 18:14:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:20.199 18:14:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:20.199 18:14:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.199 18:14:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:20.199 18:14:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:20.199 18:14:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:20.199 18:14:13 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:10:20.484 18:14:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:20.484 18:14:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:20.484 18:14:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:20.484 18:14:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:20.484 18:14:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:20.484 18:14:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:20.484 18:14:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:20.484 18:14:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:20.484 18:14:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:20.484 18:14:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:20.747 18:14:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:22.126 [2024-11-26 18:14:15.238604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:22.126 [2024-11-26 18:14:15.358858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.126 [2024-11-26 18:14:15.358882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.385 [2024-11-26 18:14:15.565925] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:22.385 [2024-11-26 18:14:15.566028] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:23.761 18:14:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:23.761 spdk_app_start Round 2 00:10:23.761 18:14:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:23.761 18:14:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59589 /var/tmp/spdk-nbd.sock 00:10:23.761 18:14:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59589 ']' 00:10:23.761 18:14:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:23.761 18:14:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:23.761 18:14:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
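Round 1 above repeats the data-path check: nbd_rpc_data_verify seeds 1 MiB of random data, writes it through each nbd device with O_DIRECT, then byte-compares it back. The commands below are taken from the trace; only the scratch path is shortened:

    tmp_file=/tmp/nbdrandtest            # trace: test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB seed
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write phase
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"   # verify phase: any mismatch fails the test
    done
    rm "$tmp_file"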
00:10:23.761 18:14:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.761 18:14:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:24.020 18:14:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.020 18:14:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:24.020 18:14:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:24.279 Malloc0 00:10:24.279 18:14:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:24.539 Malloc1 00:10:24.539 18:14:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:24.539 18:14:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:24.798 /dev/nbd0 00:10:24.798 18:14:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:24.798 18:14:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:24.798 18:14:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:24.798 18:14:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:24.798 18:14:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:24.798 18:14:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:24.798 18:14:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:24.798 18:14:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:24.798 18:14:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:24.798 18:14:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:24.798 18:14:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:24.798 1+0 records in 00:10:24.798 1+0 records out 
00:10:24.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038702 s, 10.6 MB/s 00:10:24.798 18:14:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:24.798 18:14:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:24.798 18:14:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:24.798 18:14:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:24.798 18:14:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:24.798 18:14:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:24.798 18:14:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:24.799 18:14:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:25.057 /dev/nbd1 00:10:25.057 18:14:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:25.057 18:14:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:25.057 18:14:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:25.057 18:14:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:25.057 18:14:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:25.057 18:14:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:25.057 18:14:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:25.057 18:14:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:25.057 18:14:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:25.057 18:14:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:25.057 18:14:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:25.057 1+0 records in 00:10:25.057 1+0 records out 00:10:25.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458153 s, 8.9 MB/s 00:10:25.057 18:14:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:25.057 18:14:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:25.057 18:14:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:25.057 18:14:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:25.057 18:14:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:25.057 18:14:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:25.057 18:14:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:25.057 18:14:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:25.057 18:14:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:25.057 18:14:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:25.316 { 00:10:25.316 "nbd_device": "/dev/nbd0", 00:10:25.316 "bdev_name": "Malloc0" 00:10:25.316 }, 00:10:25.316 { 00:10:25.316 "nbd_device": "/dev/nbd1", 00:10:25.316 "bdev_name": "Malloc1" 00:10:25.316 } 
00:10:25.316 ]' 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:25.316 { 00:10:25.316 "nbd_device": "/dev/nbd0", 00:10:25.316 "bdev_name": "Malloc0" 00:10:25.316 }, 00:10:25.316 { 00:10:25.316 "nbd_device": "/dev/nbd1", 00:10:25.316 "bdev_name": "Malloc1" 00:10:25.316 } 00:10:25.316 ]' 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:25.316 /dev/nbd1' 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:25.316 /dev/nbd1' 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:25.316 18:14:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:25.576 256+0 records in 00:10:25.576 256+0 records out 00:10:25.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125052 s, 83.9 MB/s 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:25.576 256+0 records in 00:10:25.576 256+0 records out 00:10:25.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021285 s, 49.3 MB/s 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:25.576 256+0 records in 00:10:25.576 256+0 records out 00:10:25.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251264 s, 41.7 MB/s 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:25.576 18:14:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:25.835 18:14:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:25.835 18:14:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:25.835 18:14:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:25.835 18:14:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:25.835 18:14:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:25.835 18:14:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:25.835 18:14:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:25.835 18:14:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:25.835 18:14:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:25.835 18:14:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:25.835 18:14:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:25.835 18:14:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:25.835 18:14:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:25.835 18:14:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:25.835 18:14:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:25.835 18:14:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:26.094 18:14:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:26.094 18:14:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:26.094 18:14:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:26.094 18:14:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:26.094 18:14:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:26.094 18:14:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:26.094 18:14:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:26.094 18:14:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:10:26.354 18:14:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:26.354 18:14:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:26.354 18:14:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:26.354 18:14:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:26.354 18:14:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:26.354 18:14:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:26.354 18:14:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:26.354 18:14:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:26.354 18:14:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:26.354 18:14:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:26.612 18:14:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:27.991 [2024-11-26 18:14:21.088731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:27.991 [2024-11-26 18:14:21.206121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.991 [2024-11-26 18:14:21.206122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.250 [2024-11-26 18:14:21.402421] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:28.250 [2024-11-26 18:14:21.402519] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:29.640 18:14:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59589 /var/tmp/spdk-nbd.sock 00:10:29.640 18:14:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59589 ']' 00:10:29.640 18:14:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:29.640 18:14:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:29.640 18:14:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
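The "Waiting for process..." echo above comes from waitforlisten. Its loop body is not expanded in this xtrace, so the sketch below is a guess at the mechanism: only the max_retries=100 default, the message, and the eventual return 0 come from the trace; the rpc_get_methods probe and the 0.5 s delay are assumptions:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died while starting
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then
                return 0                              # socket is up and answering RPCs
            fi
            sleep 0.5
        done
        return 1
    }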
00:10:29.640 18:14:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.640 18:14:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:29.904 18:14:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.904 18:14:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:29.904 18:14:23 event.app_repeat -- event/event.sh@39 -- # killprocess 59589 00:10:29.904 18:14:23 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59589 ']' 00:10:29.904 18:14:23 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59589 00:10:29.904 18:14:23 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:10:29.904 18:14:23 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.904 18:14:23 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59589 00:10:29.904 18:14:23 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.904 18:14:23 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.904 killing process with pid 59589 00:10:29.904 18:14:23 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59589' 00:10:29.904 18:14:23 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59589 00:10:29.904 18:14:23 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59589 00:10:31.280 spdk_app_start is called in Round 0. 00:10:31.280 Shutdown signal received, stop current app iteration 00:10:31.280 Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 reinitialization... 00:10:31.280 spdk_app_start is called in Round 1. 00:10:31.280 Shutdown signal received, stop current app iteration 00:10:31.280 Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 reinitialization... 00:10:31.280 spdk_app_start is called in Round 2. 00:10:31.280 Shutdown signal received, stop current app iteration 00:10:31.280 Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 reinitialization... 00:10:31.280 spdk_app_start is called in Round 3. 00:10:31.280 Shutdown signal received, stop current app iteration 00:10:31.280 18:14:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:31.280 18:14:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:31.280 00:10:31.280 real 0m20.169s 00:10:31.280 user 0m43.516s 00:10:31.280 sys 0m2.986s 00:10:31.280 18:14:24 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.280 18:14:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:31.280 ************************************ 00:10:31.280 END TEST app_repeat 00:10:31.280 ************************************ 00:10:31.280 18:14:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:31.280 18:14:24 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:31.280 18:14:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:31.280 18:14:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.280 18:14:24 event -- common/autotest_common.sh@10 -- # set +x 00:10:31.280 ************************************ 00:10:31.280 START TEST cpu_locks 00:10:31.280 ************************************ 00:10:31.280 18:14:24 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:31.280 * Looking for test storage... 
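app_repeat ends by disarming the trap it set at startup (event.sh@20, visible earlier in this log) and only then killing the target deliberately, so any signal or early exit during the rounds still cleans up. The guard pattern, assembled from those two traced lines; the pid is the one from this run, kept for illustration only:

    repeat_pid=59589

    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT   # armed before round 0
    # ... the three spdk_app_start rounds run against $repeat_pid ...
    trap - SIGINT SIGTERM EXIT    # success path: disarm before the intended kill
    killprocess $repeat_pid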
00:10:31.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:31.280 18:14:24 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:31.280 18:14:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:10:31.280 18:14:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:31.280 18:14:24 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:31.280 18:14:24 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.280 18:14:24 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.280 18:14:24 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.280 18:14:24 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.280 18:14:24 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.280 18:14:24 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.280 18:14:24 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.280 18:14:24 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.280 18:14:24 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.280 18:14:24 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.281 18:14:24 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:31.281 18:14:24 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.281 18:14:24 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:31.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.281 --rc genhtml_branch_coverage=1 00:10:31.281 --rc genhtml_function_coverage=1 00:10:31.281 --rc genhtml_legend=1 00:10:31.281 --rc geninfo_all_blocks=1 00:10:31.281 --rc geninfo_unexecuted_blocks=1 00:10:31.281 00:10:31.281 ' 00:10:31.281 18:14:24 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:31.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.281 --rc genhtml_branch_coverage=1 00:10:31.281 --rc genhtml_function_coverage=1 
00:10:31.281 --rc genhtml_legend=1 00:10:31.281 --rc geninfo_all_blocks=1 00:10:31.281 --rc geninfo_unexecuted_blocks=1 00:10:31.281 00:10:31.281 ' 00:10:31.281 18:14:24 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:31.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.281 --rc genhtml_branch_coverage=1 00:10:31.281 --rc genhtml_function_coverage=1 00:10:31.281 --rc genhtml_legend=1 00:10:31.281 --rc geninfo_all_blocks=1 00:10:31.281 --rc geninfo_unexecuted_blocks=1 00:10:31.281 00:10:31.281 ' 00:10:31.281 18:14:24 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:31.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.281 --rc genhtml_branch_coverage=1 00:10:31.281 --rc genhtml_function_coverage=1 00:10:31.281 --rc genhtml_legend=1 00:10:31.281 --rc geninfo_all_blocks=1 00:10:31.281 --rc geninfo_unexecuted_blocks=1 00:10:31.281 00:10:31.281 ' 00:10:31.281 18:14:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:31.281 18:14:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:31.281 18:14:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:31.281 18:14:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:31.281 18:14:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:31.281 18:14:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.281 18:14:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:31.540 ************************************ 00:10:31.540 START TEST default_locks 00:10:31.540 ************************************ 00:10:31.540 18:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:10:31.540 18:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60044 00:10:31.540 18:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:31.540 18:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60044 00:10:31.540 18:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60044 ']' 00:10:31.540 18:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.540 18:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.540 18:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.540 18:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.540 18:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:31.540 [2024-11-26 18:14:24.722821] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
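
The xtrace run above is SPDK's shell-side version check (scripts/common.sh) deciding whether the installed lcov predates 2.x before picking coverage flags. A condensed sketch of that comparison logic, with the helper bodies trimmed down (the real decimal() validates every component before it reaches the arithmetic below):

  # Simplified from the cmp_versions/lt trace above; assumes purely
  # numeric components, which decimal() guarantees in the real script.
  cmp_versions() { # e.g. cmp_versions 1.15 '<' 2
      local ver1 ver2 v op=$2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          # Missing components compare as 0, so "2" reads as 2.0 here.
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '<=' || $op == '>=' || $op == '==' ]] # equal throughout
  }
  lt() { cmp_versions "$1" '<' "$2"; } # lt 1.15 2 -> true, so the LCOV_OPTS above get exported
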
00:10:31.540 [2024-11-26 18:14:24.722958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60044 ] 00:10:31.800 [2024-11-26 18:14:24.882606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.801 [2024-11-26 18:14:25.040973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.180 18:14:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.180 18:14:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:10:33.180 18:14:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60044 00:10:33.180 18:14:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60044 00:10:33.180 18:14:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:33.438 18:14:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60044 00:10:33.438 18:14:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60044 ']' 00:10:33.438 18:14:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60044 00:10:33.438 18:14:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:10:33.438 18:14:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.438 18:14:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60044 00:10:33.438 18:14:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.438 18:14:26 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.438 killing process with pid 60044 00:10:33.438 18:14:26 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60044' 00:10:33.438 18:14:26 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60044 00:10:33.438 18:14:26 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60044 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60044 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60044 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60044 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60044 ']' 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.725 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:36.725 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60044) - No such process 00:10:36.725 ERROR: process (pid: 60044) is no longer running 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:36.725 18:14:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:36.726 18:14:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:36.726 18:14:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:36.726 00:10:36.726 real 0m4.895s 00:10:36.726 user 0m4.704s 00:10:36.726 sys 0m0.859s 00:10:36.726 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.726 18:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:36.726 ************************************ 00:10:36.726 END TEST default_locks 00:10:36.726 ************************************ 00:10:36.726 18:14:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:36.726 18:14:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:36.726 18:14:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.726 18:14:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:36.726 ************************************ 00:10:36.726 START TEST default_locks_via_rpc 00:10:36.726 ************************************ 00:10:36.726 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:10:36.726 18:14:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60133 00:10:36.726 18:14:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:36.726 18:14:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60133 00:10:36.726 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60133 ']' 00:10:36.726 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.726 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
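
The default_locks pass above boils down to two small probes: while the target (pid 60044) is alive, lslocks must show it holding a file whose name contains spdk_cpu_lock, and once it is killed the /var/tmp glob must come back empty. A condensed sketch of both helpers (the real versions live in test/event/cpu_locks.sh and carry extra bookkeeping):

  locks_exist() { # does pid $1 still hold a per-core lock file?
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  no_locks() {    # has every /var/tmp/spdk_cpu_lock_* file been released?
      shopt -s nullglob # unmatched glob -> empty array, not a literal string
      local lock_files=(/var/tmp/spdk_cpu_lock_*)
      (( ${#lock_files[@]} == 0 ))
  }
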
00:10:36.726 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.726 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.726 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.726 [2024-11-26 18:14:29.684934] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:10:36.726 [2024-11-26 18:14:29.685067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60133 ] 00:10:36.726 [2024-11-26 18:14:29.863080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.726 [2024-11-26 18:14:30.008005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60133 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60133 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60133 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60133 ']' 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60133 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.102 18:14:31 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60133 00:10:38.362 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.362 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.362 killing process with pid 60133 00:10:38.362 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60133' 00:10:38.362 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60133 00:10:38.362 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60133 00:10:41.650 00:10:41.650 real 0m4.827s 00:10:41.650 user 0m4.573s 00:10:41.650 sys 0m0.835s 00:10:41.650 18:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.650 18:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.650 ************************************ 00:10:41.650 END TEST default_locks_via_rpc 00:10:41.650 ************************************ 00:10:41.650 18:14:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:41.650 18:14:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:41.650 18:14:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.650 18:14:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:41.650 ************************************ 00:10:41.650 START TEST non_locking_app_on_locked_coremask 00:10:41.650 ************************************ 00:10:41.650 18:14:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:10:41.650 18:14:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60207 00:10:41.650 18:14:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:41.650 18:14:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60207 /var/tmp/spdk.sock 00:10:41.650 18:14:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60207 ']' 00:10:41.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.650 18:14:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.650 18:14:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.651 18:14:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.651 18:14:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.651 18:14:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:41.651 [2024-11-26 18:14:34.576940] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
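
default_locks_via_rpc, which just finished above, proves the same locks can be dropped and retaken on a live target over its JSON-RPC socket instead of only at startup. The same sequence expressed as direct rpc.py calls rather than the rpc_cmd wrapper from the trace (assuming rpc_cmd forwards to scripts/rpc.py against $rpc_addr, here the default /var/tmp/spdk.sock):

  scripts/rpc.py framework_disable_cpumask_locks   # target releases its lock files
  lslocks -p "$spdk_tgt_pid" | grep spdk_cpu_lock  # -> no matches now
  scripts/rpc.py framework_enable_cpumask_locks    # target re-claims core 0
  lslocks -p "$spdk_tgt_pid" | grep spdk_cpu_lock  # -> the lock row is back
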
00:10:41.651 [2024-11-26 18:14:34.577080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60207 ] 00:10:41.651 [2024-11-26 18:14:34.743365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.651 [2024-11-26 18:14:34.906654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.585 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.585 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:42.585 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:42.585 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60229 00:10:42.585 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60229 /var/tmp/spdk2.sock 00:10:42.585 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60229 ']' 00:10:42.585 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:42.585 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.585 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:42.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:42.585 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.585 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:42.843 [2024-11-26 18:14:35.928870] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:10:42.843 [2024-11-26 18:14:35.929060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60229 ] 00:10:42.843 [2024-11-26 18:14:36.110412] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:42.843 [2024-11-26 18:14:36.110484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.101 [2024-11-26 18:14:36.350251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.634 18:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.634 18:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:45.634 18:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60207 00:10:45.634 18:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60207 00:10:45.634 18:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:45.893 18:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60207 00:10:45.893 18:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60207 ']' 00:10:45.893 18:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60207 00:10:45.893 18:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:45.893 18:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.893 18:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60207 00:10:45.893 18:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.893 18:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.893 killing process with pid 60207 00:10:45.893 18:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60207' 00:10:45.893 18:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60207 00:10:45.893 18:14:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60207 00:10:52.457 18:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60229 00:10:52.457 18:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60229 ']' 00:10:52.457 18:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60229 00:10:52.457 18:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:52.457 18:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.457 18:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60229 00:10:52.457 killing process with pid 60229 00:10:52.457 18:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.458 18:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.458 18:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60229' 00:10:52.458 18:14:44 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60229 00:10:52.458 18:14:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60229 00:10:54.361 ************************************ 00:10:54.361 END TEST non_locking_app_on_locked_coremask 00:10:54.361 ************************************ 00:10:54.361 00:10:54.361 real 0m13.110s 00:10:54.361 user 0m13.415s 00:10:54.361 sys 0m1.319s 00:10:54.361 18:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.361 18:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:54.361 18:14:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:54.361 18:14:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:54.361 18:14:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.361 18:14:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:54.361 ************************************ 00:10:54.361 START TEST locking_app_on_unlocked_coremask 00:10:54.361 ************************************ 00:10:54.361 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:10:54.361 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60393 00:10:54.361 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:54.361 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60393 /var/tmp/spdk.sock 00:10:54.361 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60393 ']' 00:10:54.361 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.361 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.361 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.361 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.361 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:54.619 [2024-11-26 18:14:47.752248] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:10:54.619 [2024-11-26 18:14:47.752468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60393 ] 00:10:54.619 [2024-11-26 18:14:47.925495] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
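
non_locking_app_on_locked_coremask, closed out above, is the sharing case: the first target claims core 0, and a second target may run on the same core only because it opts out of the locking scheme. The two launches as they appear in the trace, with paths shortened (the -r flag moves the second instance to its own RPC socket so the pair do not fight over /var/tmp/spdk.sock):

  build/bin/spdk_tgt -m 0x1 &                      # pid 60207, flocks /var/tmp/spdk_cpu_lock_000
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
      -r /var/tmp/spdk2.sock &                     # pid 60229, same core, no claim -> boots fine
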
00:10:54.619 [2024-11-26 18:14:47.925652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.878 [2024-11-26 18:14:48.053359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.849 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.849 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:55.849 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60409 00:10:55.849 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:55.849 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60409 /var/tmp/spdk2.sock 00:10:55.849 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60409 ']' 00:10:55.849 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:55.849 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.849 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:55.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:55.849 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.849 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:55.849 [2024-11-26 18:14:49.038844] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:10:55.849 [2024-11-26 18:14:49.039059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60409 ] 00:10:56.108 [2024-11-26 18:14:49.212101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.367 [2024-11-26 18:14:49.458722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.903 18:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.903 18:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:58.903 18:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60409 00:10:58.903 18:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:58.903 18:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60409 00:10:58.903 18:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60393 00:10:58.903 18:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60393 ']' 00:10:58.903 18:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60393 00:10:58.903 18:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:58.903 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.903 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60393 00:10:58.903 killing process with pid 60393 00:10:58.903 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.903 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.903 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60393' 00:10:58.903 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60393 00:10:58.903 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60393 00:11:04.181 18:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60409 00:11:04.181 18:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60409 ']' 00:11:04.181 18:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60409 00:11:04.181 18:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:04.181 18:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.181 18:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60409 00:11:04.181 killing process with pid 60409 00:11:04.181 18:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.181 18:14:57 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.181 18:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60409' 00:11:04.181 18:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60409 00:11:04.181 18:14:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60409 00:11:06.733 00:11:06.733 real 0m12.045s 00:11:06.733 user 0m12.318s 00:11:06.733 sys 0m1.265s 00:11:06.733 ************************************ 00:11:06.733 END TEST locking_app_on_unlocked_coremask 00:11:06.733 ************************************ 00:11:06.733 18:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.733 18:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:06.733 18:14:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:06.733 18:14:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:06.733 18:14:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.733 18:14:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:06.733 ************************************ 00:11:06.733 START TEST locking_app_on_locked_coremask 00:11:06.733 ************************************ 00:11:06.733 18:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:11:06.733 18:14:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60563 00:11:06.733 18:14:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60563 /var/tmp/spdk.sock 00:11:06.733 18:14:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:06.733 18:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60563 ']' 00:11:06.734 18:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.734 18:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.734 18:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.734 18:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.734 18:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:06.734 [2024-11-26 18:14:59.863592] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
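
The teardown steps traced above all go through the killprocess helper: verify the pid is set and alive, peek at the command name (the reactor_0 seen in the log), refuse to kill a sudo wrapper, then signal and reap. A condensed sketch, Linux branch only (the real autotest_common.sh version also handles non-Linux ps output and differs in detail):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1               # still alive?
      local name
      name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
      [[ $name == sudo ]] && return 1          # never kill a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                              # reap it; only works for our own children
  }
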
00:11:06.734 [2024-11-26 18:14:59.863837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60563 ] 00:11:06.734 [2024-11-26 18:15:00.041778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.992 [2024-11-26 18:15:00.157090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60579 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60579 /var/tmp/spdk2.sock 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60579 /var/tmp/spdk2.sock 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60579 /var/tmp/spdk2.sock 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60579 ']' 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:07.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.929 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:07.929 [2024-11-26 18:15:01.174056] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:11:07.929 [2024-11-26 18:15:01.174344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60579 ] 00:11:08.187 [2024-11-26 18:15:01.361255] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60563 has claimed it. 00:11:08.187 [2024-11-26 18:15:01.361325] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:08.446 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60579) - No such process 00:11:08.446 ERROR: process (pid: 60579) is no longer running 00:11:08.446 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.446 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:08.446 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:08.446 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:08.446 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:08.446 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:08.446 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60563 00:11:08.705 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60563 00:11:08.705 18:15:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:08.964 18:15:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60563 00:11:08.964 18:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60563 ']' 00:11:08.964 18:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60563 00:11:08.964 18:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:08.964 18:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.964 18:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60563 00:11:09.223 killing process with pid 60563 00:11:09.223 18:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.223 18:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.223 18:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60563' 00:11:09.223 18:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60563 00:11:09.223 18:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60563 00:11:11.757 00:11:11.757 real 0m5.210s 00:11:11.757 user 0m5.468s 00:11:11.757 sys 0m0.838s 00:11:11.757 18:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.757 18:15:04 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:11:11.757 ************************************ 00:11:11.757 END TEST locking_app_on_locked_coremask 00:11:11.757 ************************************ 00:11:11.757 18:15:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:11.757 18:15:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:11.757 18:15:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.757 18:15:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:11.757 ************************************ 00:11:11.757 START TEST locking_overlapped_coremask 00:11:11.757 ************************************ 00:11:11.757 18:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:11:11.757 18:15:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60654 00:11:11.757 18:15:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:11.757 18:15:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60654 /var/tmp/spdk.sock 00:11:11.758 18:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60654 ']' 00:11:11.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.758 18:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.758 18:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.758 18:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.758 18:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.758 18:15:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:12.016 [2024-11-26 18:15:05.157551] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
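
locking_app_on_locked_coremask, which ends above, leans on the NOT() inversion idiom: the second target must refuse to start when its core is already claimed, so the test wraps waitforlisten in NOT and passes exactly when the wrapped command fails. A condensed sketch (the real helper also screens the failure mode, e.g. the (( es > 128 )) signal check visible in the trace):

  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es" # killed by a signal: a real failure, not an expected one
      (( !es == 0 ))                 # succeed only if "$@" failed
  }
  # Usage mirroring the log: pid 60579 never came up, so this passes.
  NOT waitforlisten 60579 /var/tmp/spdk2.sock
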
00:11:12.016 [2024-11-26 18:15:05.157710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60654 ] 00:11:12.016 [2024-11-26 18:15:05.339571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:12.275 [2024-11-26 18:15:05.481753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.275 [2024-11-26 18:15:05.481933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.275 [2024-11-26 18:15:05.481977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.213 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.213 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:13.213 18:15:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:13.213 18:15:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60678 00:11:13.213 18:15:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60678 /var/tmp/spdk2.sock 00:11:13.213 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:13.213 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60678 /var/tmp/spdk2.sock 00:11:13.213 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:13.213 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:13.213 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:13.213 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:13.213 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60678 /var/tmp/spdk2.sock 00:11:13.213 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60678 ']' 00:11:13.213 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:13.214 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.214 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:13.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:13.214 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.214 18:15:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:13.472 [2024-11-26 18:15:06.610203] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:11:13.472 [2024-11-26 18:15:06.610432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60678 ] 00:11:13.472 [2024-11-26 18:15:06.797812] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60654 has claimed it. 00:11:13.472 [2024-11-26 18:15:06.797886] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:14.040 ERROR: process (pid: 60678) is no longer running 00:11:14.040 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60678) - No such process 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60654 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60654 ']' 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60654 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60654 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60654' 00:11:14.040 killing process with pid 60654 00:11:14.040 18:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60654 00:11:14.040 18:15:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60654 00:11:17.327 00:11:17.327 real 0m4.974s 00:11:17.327 user 0m13.547s 00:11:17.327 sys 0m0.628s 00:11:17.327 18:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.327 18:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:17.327 ************************************ 00:11:17.327 END TEST locking_overlapped_coremask 00:11:17.327 ************************************ 00:11:17.327 18:15:10 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:17.327 18:15:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:17.327 18:15:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.327 18:15:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:17.327 ************************************ 00:11:17.327 START TEST locking_overlapped_coremask_via_rpc 00:11:17.327 ************************************ 00:11:17.327 18:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:11:17.327 18:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60747 00:11:17.327 18:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60747 /var/tmp/spdk.sock 00:11:17.327 18:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60747 ']' 00:11:17.327 18:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:17.327 18:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.327 18:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.327 18:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.327 18:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.327 18:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.327 [2024-11-26 18:15:10.187769] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:11:17.327 [2024-11-26 18:15:10.187928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60747 ] 00:11:17.327 [2024-11-26 18:15:10.373856] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
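
locking_overlapped_coremask, finished above, comes down to the geometry of the lock files: one flock'd file per claimed core, so any overlap between two cpumasks collides on the shared core. The masks from the trace and the surviving-lock check, condensed:

  #   -m 0x7  -> cores 0,1,2 -> /var/tmp/spdk_cpu_lock_000..002 created
  #   -m 0x1c -> cores 2,3,4 -> needs spdk_cpu_lock_002, already held -> startup aborts
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]] # the first target's three locks, and nothing else, remain
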
00:11:17.327 [2024-11-26 18:15:10.373932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:17.327 [2024-11-26 18:15:10.503083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.327 [2024-11-26 18:15:10.503307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.327 [2024-11-26 18:15:10.503354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.262 18:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.262 18:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:18.262 18:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:18.262 18:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60767 00:11:18.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:18.262 18:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60767 /var/tmp/spdk2.sock 00:11:18.262 18:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60767 ']' 00:11:18.262 18:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:18.262 18:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.262 18:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:18.262 18:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.262 18:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.519 [2024-11-26 18:15:11.655484] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:11:18.519 [2024-11-26 18:15:11.655716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60767 ] 00:11:18.519 [2024-11-26 18:15:11.844532] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:18.519 [2024-11-26 18:15:11.844615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:19.112 [2024-11-26 18:15:12.124886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.112 [2024-11-26 18:15:12.128768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.112 [2024-11-26 18:15:12.128795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.648 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.649 [2024-11-26 18:15:14.377030] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60747 has claimed it. 
00:11:21.649 request: 00:11:21.649 { 00:11:21.649 "method": "framework_enable_cpumask_locks", 00:11:21.649 "req_id": 1 00:11:21.649 } 00:11:21.649 Got JSON-RPC error response 00:11:21.649 response: 00:11:21.649 { 00:11:21.649 "code": -32603, 00:11:21.649 "message": "Failed to claim CPU core: 2" 00:11:21.649 } 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60747 /var/tmp/spdk.sock 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60747 ']' 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60767 /var/tmp/spdk2.sock 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60767 ']' 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:21.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
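The -32603 response above is the point of this test: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), both with --disable-cpumask-locks so that no lock files are taken at startup. The first framework_enable_cpumask_locks call claims cores 0-2; the second target then fails on the shared core 2. A minimal by-hand reproduction, assuming a built tree with the repo root as working directory (commands as they appear in the trace):

    # Two targets with overlapping coremasks; lock-file creation is deferred.
    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # cores 0-2
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4

    # First claim succeeds, creating /var/tmp/spdk_cpu_lock_000..002.
    scripts/rpc.py framework_enable_cpumask_locks

    # Second claim fails on the shared core 2 with the -32603 error above.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks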
00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:21.649 00:11:21.649 real 0m4.850s 00:11:21.649 user 0m1.553s 00:11:21.649 sys 0m0.230s 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.649 18:15:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.649 ************************************ 00:11:21.649 END TEST locking_overlapped_coremask_via_rpc 00:11:21.649 ************************************ 00:11:21.649 18:15:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:21.649 18:15:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60747 ]] 00:11:21.649 18:15:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60747 00:11:21.649 18:15:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60747 ']' 00:11:21.649 18:15:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60747 00:11:21.908 18:15:14 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:21.908 18:15:14 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.908 18:15:14 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60747 00:11:21.908 killing process with pid 60747 00:11:21.908 18:15:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.908 18:15:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.908 18:15:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60747' 00:11:21.908 18:15:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60747 00:11:21.908 18:15:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60747 00:11:24.442 18:15:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60767 ]] 00:11:24.442 18:15:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60767 00:11:24.442 18:15:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60767 ']' 00:11:24.442 18:15:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60767 00:11:24.442 18:15:17 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:24.442 18:15:17 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.442 
18:15:17 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60767 00:11:24.442 killing process with pid 60767 00:11:24.442 18:15:17 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:24.442 18:15:17 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:24.442 18:15:17 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60767' 00:11:24.442 18:15:17 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60767 00:11:24.442 18:15:17 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60767 00:11:27.727 18:15:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:27.727 18:15:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:27.727 18:15:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60747 ]] 00:11:27.727 18:15:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60747 00:11:27.727 18:15:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60747 ']' 00:11:27.727 Process with pid 60747 is not found 00:11:27.727 18:15:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60747 00:11:27.727 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60747) - No such process 00:11:27.727 18:15:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60747 is not found' 00:11:27.727 18:15:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60767 ]] 00:11:27.727 18:15:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60767 00:11:27.727 18:15:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60767 ']' 00:11:27.727 18:15:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60767 00:11:27.727 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60767) - No such process 00:11:27.727 18:15:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60767 is not found' 00:11:27.727 Process with pid 60767 is not found 00:11:27.727 18:15:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:27.727 00:11:27.727 real 0m56.017s 00:11:27.727 user 1m35.299s 00:11:27.727 sys 0m7.232s 00:11:27.728 18:15:20 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.728 18:15:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:27.728 ************************************ 00:11:27.728 END TEST cpu_locks 00:11:27.728 ************************************ 00:11:27.728 00:11:27.728 real 1m28.023s 00:11:27.728 user 2m39.385s 00:11:27.728 sys 0m11.451s 00:11:27.728 18:15:20 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.728 18:15:20 event -- common/autotest_common.sh@10 -- # set +x 00:11:27.728 ************************************ 00:11:27.728 END TEST event 00:11:27.728 ************************************ 00:11:27.728 18:15:20 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:27.728 18:15:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:27.728 18:15:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.728 18:15:20 -- common/autotest_common.sh@10 -- # set +x 00:11:27.728 ************************************ 00:11:27.728 START TEST thread 00:11:27.728 ************************************ 00:11:27.728 18:15:20 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:27.728 * Looking for test storage... 
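For reference, the check_remaining_locks step at the end of the cpu_locks run above compares the lock files actually present against the set the surviving claim should hold, and the cleanup then kills both targets and removes the files. A condensed sketch of that check, using the names from the trace:

    # Expect exactly the lock files for cores 0-2, claimed by the first target.
    locks=(/var/tmp/spdk_cpu_lock_*)                     # what exists
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # what should exist
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] || exit 1
    rm -f /var/tmp/spdk_cpu_lock_*                       # cleanup, as in the log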
00:11:27.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:27.728 18:15:20 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:27.728 18:15:20 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:11:27.728 18:15:20 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:27.728 18:15:20 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:27.728 18:15:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.728 18:15:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.728 18:15:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.728 18:15:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.728 18:15:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.728 18:15:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.728 18:15:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.728 18:15:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.728 18:15:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.728 18:15:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.728 18:15:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.728 18:15:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:27.728 18:15:20 thread -- scripts/common.sh@345 -- # : 1 00:11:27.728 18:15:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.728 18:15:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:27.728 18:15:20 thread -- scripts/common.sh@365 -- # decimal 1 00:11:27.728 18:15:20 thread -- scripts/common.sh@353 -- # local d=1 00:11:27.728 18:15:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.728 18:15:20 thread -- scripts/common.sh@355 -- # echo 1 00:11:27.728 18:15:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.728 18:15:20 thread -- scripts/common.sh@366 -- # decimal 2 00:11:27.728 18:15:20 thread -- scripts/common.sh@353 -- # local d=2 00:11:27.728 18:15:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.728 18:15:20 thread -- scripts/common.sh@355 -- # echo 2 00:11:27.728 18:15:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.728 18:15:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.728 18:15:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.728 18:15:20 thread -- scripts/common.sh@368 -- # return 0 00:11:27.728 18:15:20 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.728 18:15:20 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:27.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.728 --rc genhtml_branch_coverage=1 00:11:27.728 --rc genhtml_function_coverage=1 00:11:27.728 --rc genhtml_legend=1 00:11:27.728 --rc geninfo_all_blocks=1 00:11:27.728 --rc geninfo_unexecuted_blocks=1 00:11:27.728 00:11:27.728 ' 00:11:27.728 18:15:20 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:27.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.728 --rc genhtml_branch_coverage=1 00:11:27.728 --rc genhtml_function_coverage=1 00:11:27.728 --rc genhtml_legend=1 00:11:27.728 --rc geninfo_all_blocks=1 00:11:27.728 --rc geninfo_unexecuted_blocks=1 00:11:27.728 00:11:27.728 ' 00:11:27.728 18:15:20 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:27.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:11:27.728 --rc genhtml_branch_coverage=1 00:11:27.728 --rc genhtml_function_coverage=1 00:11:27.728 --rc genhtml_legend=1 00:11:27.728 --rc geninfo_all_blocks=1 00:11:27.728 --rc geninfo_unexecuted_blocks=1 00:11:27.728 00:11:27.728 ' 00:11:27.728 18:15:20 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:27.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.728 --rc genhtml_branch_coverage=1 00:11:27.728 --rc genhtml_function_coverage=1 00:11:27.728 --rc genhtml_legend=1 00:11:27.728 --rc geninfo_all_blocks=1 00:11:27.728 --rc geninfo_unexecuted_blocks=1 00:11:27.728 00:11:27.728 ' 00:11:27.728 18:15:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:27.728 18:15:20 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:27.728 18:15:20 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.728 18:15:20 thread -- common/autotest_common.sh@10 -- # set +x 00:11:27.728 ************************************ 00:11:27.728 START TEST thread_poller_perf 00:11:27.728 ************************************ 00:11:27.728 18:15:20 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:27.728 [2024-11-26 18:15:20.800146] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:11:27.728 [2024-11-26 18:15:20.800307] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60973 ] 00:11:27.728 [2024-11-26 18:15:20.979348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.987 Running 1000 pollers for 1 seconds with 1 microseconds period. 
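poller_perf registers a batch of pollers on a single reactor and measures the per-poll overhead; the announcement it prints maps directly onto its flags. The two invocations in this run, as a sketch (paths relative to the repo root):

    # -b: number of pollers, -l: period in microseconds (0 = run every
    # reactor iteration), -t: run time in seconds.
    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # timed pollers, 1us period
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # pollers with no period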
00:11:27.987 [2024-11-26 18:15:21.104421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.366 [2024-11-26T18:15:22.701Z] ====================================== 00:11:29.366 [2024-11-26T18:15:22.701Z] busy:2298671700 (cyc) 00:11:29.366 [2024-11-26T18:15:22.701Z] total_run_count: 356000 00:11:29.366 [2024-11-26T18:15:22.701Z] tsc_hz: 2290000000 (cyc) 00:11:29.366 [2024-11-26T18:15:22.701Z] ====================================== 00:11:29.366 [2024-11-26T18:15:22.701Z] poller_cost: 6456 (cyc), 2819 (nsec) 00:11:29.366 00:11:29.366 real 0m1.593s 00:11:29.366 user 0m1.372s 00:11:29.366 sys 0m0.114s 00:11:29.366 ************************************ 00:11:29.366 END TEST thread_poller_perf 00:11:29.366 ************************************ 00:11:29.366 18:15:22 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.366 18:15:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:29.366 18:15:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:29.366 18:15:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:29.366 18:15:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.366 18:15:22 thread -- common/autotest_common.sh@10 -- # set +x 00:11:29.366 ************************************ 00:11:29.366 START TEST thread_poller_perf 00:11:29.366 ************************************ 00:11:29.366 18:15:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:29.366 [2024-11-26 18:15:22.460566] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:11:29.366 [2024-11-26 18:15:22.460781] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61010 ] 00:11:29.366 [2024-11-26 18:15:22.629377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.625 [2024-11-26 18:15:22.751845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.625 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:11:31.061 [2024-11-26T18:15:24.396Z] ====================================== 00:11:31.061 [2024-11-26T18:15:24.396Z] busy:2293749578 (cyc) 00:11:31.061 [2024-11-26T18:15:24.396Z] total_run_count: 4637000 00:11:31.061 [2024-11-26T18:15:24.396Z] tsc_hz: 2290000000 (cyc) 00:11:31.061 [2024-11-26T18:15:24.396Z] ====================================== 00:11:31.061 [2024-11-26T18:15:24.396Z] poller_cost: 494 (cyc), 215 (nsec) 00:11:31.061 00:11:31.061 real 0m1.578s 00:11:31.061 user 0m1.380s 00:11:31.061 sys 0m0.091s 00:11:31.061 ************************************ 00:11:31.061 END TEST thread_poller_perf 00:11:31.061 ************************************ 00:11:31.061 18:15:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.061 18:15:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:31.061 18:15:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:31.061 00:11:31.061 real 0m3.535s 00:11:31.061 user 0m2.915s 00:11:31.061 sys 0m0.422s 00:11:31.061 ************************************ 00:11:31.061 END TEST thread 00:11:31.061 ************************************ 00:11:31.061 18:15:24 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.061 18:15:24 thread -- common/autotest_common.sh@10 -- # set +x 00:11:31.061 18:15:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:31.061 18:15:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:31.061 18:15:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:31.061 18:15:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.061 18:15:24 -- common/autotest_common.sh@10 -- # set +x 00:11:31.061 ************************************ 00:11:31.061 START TEST app_cmdline 00:11:31.061 ************************************ 00:11:31.061 18:15:24 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:31.061 * Looking for test storage... 
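The poller_cost lines in the two result tables follow from the other counters: cycles per poll is busy divided by total_run_count, and the nanosecond figure converts that through tsc_hz. A quick shell check against the logged numbers (a sketch; poller_perf's own rounding may differ in the last digit):

    # 0us run: 2293749578 busy cycles over 4637000 polls at a 2.29 GHz TSC.
    busy=2293749578 runs=4637000 tsc_hz=2290000000
    cyc=$(( busy / runs ))                 # -> 494, matching "poller_cost: 494 (cyc)"
    nsec=$(( cyc * 1000000000 / tsc_hz ))  # -> 215, matching "215 (nsec)"
    # The 1us run works out the same way: 2298671700 / 356000 -> 6456 cyc -> 2819 nsec.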
00:11:31.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:31.061 18:15:24 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:31.061 18:15:24 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:11:31.061 18:15:24 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:31.061 18:15:24 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.061 18:15:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:31.061 18:15:24 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.061 18:15:24 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:31.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.061 --rc genhtml_branch_coverage=1 00:11:31.061 --rc genhtml_function_coverage=1 00:11:31.061 --rc genhtml_legend=1 00:11:31.061 --rc geninfo_all_blocks=1 00:11:31.061 --rc geninfo_unexecuted_blocks=1 00:11:31.061 00:11:31.061 ' 00:11:31.061 18:15:24 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:31.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.062 --rc genhtml_branch_coverage=1 00:11:31.062 --rc genhtml_function_coverage=1 00:11:31.062 --rc genhtml_legend=1 00:11:31.062 --rc geninfo_all_blocks=1 00:11:31.062 --rc geninfo_unexecuted_blocks=1 00:11:31.062 
00:11:31.062 ' 00:11:31.062 18:15:24 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:31.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.062 --rc genhtml_branch_coverage=1 00:11:31.062 --rc genhtml_function_coverage=1 00:11:31.062 --rc genhtml_legend=1 00:11:31.062 --rc geninfo_all_blocks=1 00:11:31.062 --rc geninfo_unexecuted_blocks=1 00:11:31.062 00:11:31.062 ' 00:11:31.062 18:15:24 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:31.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.062 --rc genhtml_branch_coverage=1 00:11:31.062 --rc genhtml_function_coverage=1 00:11:31.062 --rc genhtml_legend=1 00:11:31.062 --rc geninfo_all_blocks=1 00:11:31.062 --rc geninfo_unexecuted_blocks=1 00:11:31.062 00:11:31.062 ' 00:11:31.062 18:15:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:31.062 18:15:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61099 00:11:31.062 18:15:24 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:31.062 18:15:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61099 00:11:31.062 18:15:24 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61099 ']' 00:11:31.062 18:15:24 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.062 18:15:24 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.062 18:15:24 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.062 18:15:24 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.062 18:15:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:31.320 [2024-11-26 18:15:24.428753] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:11:31.320 [2024-11-26 18:15:24.428961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61099 ] 00:11:31.320 [2024-11-26 18:15:24.608334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.579 [2024-11-26 18:15:24.733114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.515 18:15:25 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.515 18:15:25 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:11:32.515 18:15:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:32.774 { 00:11:32.774 "version": "SPDK v25.01-pre git sha1 e93f0f941", 00:11:32.774 "fields": { 00:11:32.774 "major": 25, 00:11:32.774 "minor": 1, 00:11:32.774 "patch": 0, 00:11:32.774 "suffix": "-pre", 00:11:32.774 "commit": "e93f0f941" 00:11:32.774 } 00:11:32.774 } 00:11:32.774 18:15:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:32.774 18:15:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:32.774 18:15:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:32.774 18:15:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:32.774 18:15:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:32.774 18:15:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:32.774 18:15:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:32.774 18:15:25 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.774 18:15:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:32.774 18:15:25 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.774 18:15:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:32.774 18:15:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:32.774 18:15:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:32.774 18:15:25 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:11:32.774 18:15:25 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:32.774 18:15:25 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.774 18:15:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.774 18:15:25 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.774 18:15:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.774 18:15:25 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.774 18:15:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.774 18:15:25 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.774 18:15:25 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:32.774 18:15:25 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:33.032 request: 00:11:33.032 { 00:11:33.032 "method": "env_dpdk_get_mem_stats", 00:11:33.032 "req_id": 1 00:11:33.032 } 00:11:33.032 Got JSON-RPC error response 00:11:33.032 response: 00:11:33.032 { 00:11:33.032 "code": -32601, 00:11:33.032 "message": "Method not found" 00:11:33.032 } 00:11:33.032 18:15:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:11:33.032 18:15:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:33.032 18:15:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:33.033 18:15:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:33.033 18:15:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61099 00:11:33.033 18:15:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61099 ']' 00:11:33.033 18:15:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61099 00:11:33.033 18:15:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:11:33.033 18:15:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.033 18:15:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61099 00:11:33.033 killing process with pid 61099 00:11:33.033 18:15:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.033 18:15:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.033 18:15:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61099' 00:11:33.033 18:15:26 app_cmdline -- common/autotest_common.sh@973 -- # kill 61099 00:11:33.033 18:15:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 61099 00:11:35.562 00:11:35.562 real 0m4.716s 00:11:35.562 user 0m5.019s 00:11:35.562 sys 0m0.604s 00:11:35.562 18:15:28 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.562 18:15:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:35.562 ************************************ 00:11:35.562 END TEST app_cmdline 00:11:35.562 ************************************ 00:11:35.562 18:15:28 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:35.562 18:15:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:35.562 18:15:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.562 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:11:35.562 ************************************ 00:11:35.562 START TEST version 00:11:35.562 ************************************ 00:11:35.562 18:15:28 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:35.821 * Looking for test storage... 
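The app_cmdline test that just ended pins down the --rpcs-allowed allow-list: the target was started with only spdk_get_version and rpc_get_methods permitted, so those two calls succeed while anything else, here env_dpdk_get_mem_stats, is rejected with -32601 "Method not found" instead of being executed. Condensed, assuming a target started as in the trace (rpc.py used directly in place of the rpc_cmd wrapper):

    # Target: spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
    scripts/rpc.py spdk_get_version        # allowed: returns the version JSON above
    scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two methods
    scripts/rpc.py env_dpdk_get_mem_stats  # blocked: JSON-RPC error -32601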
00:11:35.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:35.821 18:15:29 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:35.821 18:15:29 version -- common/autotest_common.sh@1693 -- # lcov --version 00:11:35.821 18:15:29 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:35.821 18:15:29 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:35.821 18:15:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.821 18:15:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.821 18:15:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.821 18:15:29 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.821 18:15:29 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.821 18:15:29 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.821 18:15:29 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.821 18:15:29 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.821 18:15:29 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.821 18:15:29 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.821 18:15:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.821 18:15:29 version -- scripts/common.sh@344 -- # case "$op" in 00:11:35.821 18:15:29 version -- scripts/common.sh@345 -- # : 1 00:11:35.821 18:15:29 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.821 18:15:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.821 18:15:29 version -- scripts/common.sh@365 -- # decimal 1 00:11:35.821 18:15:29 version -- scripts/common.sh@353 -- # local d=1 00:11:35.821 18:15:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.821 18:15:29 version -- scripts/common.sh@355 -- # echo 1 00:11:35.821 18:15:29 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.821 18:15:29 version -- scripts/common.sh@366 -- # decimal 2 00:11:35.821 18:15:29 version -- scripts/common.sh@353 -- # local d=2 00:11:35.821 18:15:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.821 18:15:29 version -- scripts/common.sh@355 -- # echo 2 00:11:35.821 18:15:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.821 18:15:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.821 18:15:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.821 18:15:29 version -- scripts/common.sh@368 -- # return 0 00:11:35.821 18:15:29 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.821 18:15:29 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:35.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.821 --rc genhtml_branch_coverage=1 00:11:35.821 --rc genhtml_function_coverage=1 00:11:35.821 --rc genhtml_legend=1 00:11:35.821 --rc geninfo_all_blocks=1 00:11:35.821 --rc geninfo_unexecuted_blocks=1 00:11:35.821 00:11:35.821 ' 00:11:35.821 18:15:29 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:35.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.821 --rc genhtml_branch_coverage=1 00:11:35.821 --rc genhtml_function_coverage=1 00:11:35.821 --rc genhtml_legend=1 00:11:35.821 --rc geninfo_all_blocks=1 00:11:35.821 --rc geninfo_unexecuted_blocks=1 00:11:35.821 00:11:35.821 ' 00:11:35.821 18:15:29 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:35.821 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:35.821 --rc genhtml_branch_coverage=1 00:11:35.821 --rc genhtml_function_coverage=1 00:11:35.821 --rc genhtml_legend=1 00:11:35.821 --rc geninfo_all_blocks=1 00:11:35.821 --rc geninfo_unexecuted_blocks=1 00:11:35.821 00:11:35.821 ' 00:11:35.821 18:15:29 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:35.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.821 --rc genhtml_branch_coverage=1 00:11:35.821 --rc genhtml_function_coverage=1 00:11:35.821 --rc genhtml_legend=1 00:11:35.821 --rc geninfo_all_blocks=1 00:11:35.821 --rc geninfo_unexecuted_blocks=1 00:11:35.821 00:11:35.821 ' 00:11:35.821 18:15:29 version -- app/version.sh@17 -- # get_header_version major 00:11:35.821 18:15:29 version -- app/version.sh@14 -- # cut -f2 00:11:35.821 18:15:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:35.821 18:15:29 version -- app/version.sh@14 -- # tr -d '"' 00:11:35.821 18:15:29 version -- app/version.sh@17 -- # major=25 00:11:35.821 18:15:29 version -- app/version.sh@18 -- # get_header_version minor 00:11:35.821 18:15:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:35.821 18:15:29 version -- app/version.sh@14 -- # tr -d '"' 00:11:35.821 18:15:29 version -- app/version.sh@14 -- # cut -f2 00:11:35.821 18:15:29 version -- app/version.sh@18 -- # minor=1 00:11:35.821 18:15:29 version -- app/version.sh@19 -- # get_header_version patch 00:11:35.821 18:15:29 version -- app/version.sh@14 -- # cut -f2 00:11:35.821 18:15:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:35.821 18:15:29 version -- app/version.sh@14 -- # tr -d '"' 00:11:35.821 18:15:29 version -- app/version.sh@19 -- # patch=0 00:11:35.821 18:15:29 version -- app/version.sh@20 -- # get_header_version suffix 00:11:35.821 18:15:29 version -- app/version.sh@14 -- # cut -f2 00:11:35.822 18:15:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:35.822 18:15:29 version -- app/version.sh@14 -- # tr -d '"' 00:11:35.822 18:15:29 version -- app/version.sh@20 -- # suffix=-pre 00:11:35.822 18:15:29 version -- app/version.sh@22 -- # version=25.1 00:11:35.822 18:15:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:35.822 18:15:29 version -- app/version.sh@28 -- # version=25.1rc0 00:11:35.822 18:15:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:35.822 18:15:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:36.079 18:15:29 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:36.079 18:15:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:36.079 00:11:36.079 real 0m0.325s 00:11:36.079 user 0m0.206s 00:11:36.079 sys 0m0.172s 00:11:36.079 18:15:29 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.079 18:15:29 version -- common/autotest_common.sh@10 -- # set +x 00:11:36.079 ************************************ 00:11:36.079 END TEST version 00:11:36.079 ************************************ 00:11:36.079 18:15:29 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:36.079 18:15:29 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:11:36.079 18:15:29 -- spdk/autotest.sh@194 -- # uname -s 00:11:36.079 18:15:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:36.079 18:15:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:36.079 18:15:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:36.079 18:15:29 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:11:36.079 18:15:29 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:11:36.079 18:15:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:36.079 18:15:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.079 18:15:29 -- common/autotest_common.sh@10 -- # set +x 00:11:36.079 ************************************ 00:11:36.079 START TEST blockdev_nvme 00:11:36.079 ************************************ 00:11:36.079 18:15:29 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:11:36.079 * Looking for test storage... 00:11:36.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:36.079 18:15:29 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:36.079 18:15:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:36.079 18:15:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:36.337 18:15:29 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.337 18:15:29 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:11:36.337 18:15:29 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.337 18:15:29 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:36.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.337 --rc genhtml_branch_coverage=1 00:11:36.337 --rc genhtml_function_coverage=1 00:11:36.337 --rc genhtml_legend=1 00:11:36.337 --rc geninfo_all_blocks=1 00:11:36.337 --rc geninfo_unexecuted_blocks=1 00:11:36.337 00:11:36.337 ' 00:11:36.337 18:15:29 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:36.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.337 --rc genhtml_branch_coverage=1 00:11:36.337 --rc genhtml_function_coverage=1 00:11:36.337 --rc genhtml_legend=1 00:11:36.337 --rc geninfo_all_blocks=1 00:11:36.337 --rc geninfo_unexecuted_blocks=1 00:11:36.337 00:11:36.337 ' 00:11:36.337 18:15:29 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:36.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.337 --rc genhtml_branch_coverage=1 00:11:36.337 --rc genhtml_function_coverage=1 00:11:36.337 --rc genhtml_legend=1 00:11:36.337 --rc geninfo_all_blocks=1 00:11:36.337 --rc geninfo_unexecuted_blocks=1 00:11:36.337 00:11:36.337 ' 00:11:36.337 18:15:29 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:36.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.338 --rc genhtml_branch_coverage=1 00:11:36.338 --rc genhtml_function_coverage=1 00:11:36.338 --rc genhtml_legend=1 00:11:36.338 --rc geninfo_all_blocks=1 00:11:36.338 --rc geninfo_unexecuted_blocks=1 00:11:36.338 00:11:36.338 ' 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:36.338 18:15:29 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61293 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:36.338 18:15:29 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61293 00:11:36.338 18:15:29 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61293 ']' 00:11:36.338 18:15:29 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.338 18:15:29 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.338 18:15:29 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.338 18:15:29 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.338 18:15:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:36.338 [2024-11-26 18:15:29.604364] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:11:36.338 [2024-11-26 18:15:29.604478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61293 ] 00:11:36.597 [2024-11-26 18:15:29.778545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.597 [2024-11-26 18:15:29.902844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.530 18:15:30 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.530 18:15:30 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:11:37.530 18:15:30 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:11:37.530 18:15:30 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:11:37.530 18:15:30 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:11:37.530 18:15:30 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:37.530 18:15:30 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:37.788 18:15:30 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:37.788 18:15:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.788 18:15:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:38.047 18:15:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.047 18:15:31 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:11:38.047 18:15:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.047 18:15:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:38.047 18:15:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.047 18:15:31 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:11:38.047 18:15:31 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:11:38.047 18:15:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.047 18:15:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:38.047 18:15:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.048 18:15:31 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:11:38.048 18:15:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.048 18:15:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:38.048 18:15:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.048 18:15:31 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:38.048 18:15:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.048 18:15:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:38.048 18:15:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.048 18:15:31 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:11:38.048 18:15:31 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:11:38.048 18:15:31 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:11:38.048 18:15:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.048 18:15:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:38.048 18:15:31 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.048 18:15:31 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:11:38.048 18:15:31 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "cafe0ba1-4f2e-4103-acc7-5e51f3d373a0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cafe0ba1-4f2e-4103-acc7-5e51f3d373a0",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f50bd94f-d502-4bea-9456-d7b26d4f590b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f50bd94f-d502-4bea-9456-d7b26d4f590b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f0363356-ea9c-4f2b-ac83-c551a78fbd8c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f0363356-ea9c-4f2b-ac83-c551a78fbd8c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f3e4fe3f-f901-43a2-8f12-01c1a2eaf852"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f3e4fe3f-f901-43a2-8f12-01c1a2eaf852",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "91a2308c-de80-4c60-916e-0873cc453b64"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "91a2308c-de80-4c60-916e-0873cc453b64",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7e83d230-9c0d-4e03-adbc-e3aaf94a1134"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7e83d230-9c0d-4e03-adbc-e3aaf94a1134",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:38.048 18:15:31 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:11:38.308 18:15:31 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:11:38.308 18:15:31 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:11:38.308 18:15:31 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:11:38.308 18:15:31 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61293 00:11:38.308 18:15:31 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61293 ']' 00:11:38.308 18:15:31 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61293 00:11:38.308 18:15:31 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:11:38.308 18:15:31 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.308 18:15:31 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61293 00:11:38.308 18:15:31 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.308 18:15:31 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.308 killing process with pid 61293 00:11:38.308 18:15:31 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61293' 00:11:38.308 18:15:31 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61293 00:11:38.308 18:15:31 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61293 00:11:40.839 18:15:33 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:40.839 18:15:33 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:40.839 18:15:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:40.839 18:15:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.839 18:15:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:40.839 ************************************ 00:11:40.839 START TEST bdev_hello_world 00:11:40.839 ************************************ 00:11:40.839 18:15:33 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:40.839 [2024-11-26 18:15:34.010847] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:11:40.839 [2024-11-26 18:15:34.010961] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61388 ] 00:11:41.097 [2024-11-26 18:15:34.188435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.097 [2024-11-26 18:15:34.317082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.665 [2024-11-26 18:15:34.993612] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:41.665 [2024-11-26 18:15:34.993675] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:41.665 [2024-11-26 18:15:34.993699] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:41.665 [2024-11-26 18:15:34.996435] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:41.665 [2024-11-26 18:15:34.996870] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:41.665 [2024-11-26 18:15:34.996896] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:41.665 [2024-11-26 18:15:34.997024] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:11:41.665 00:11:41.665 [2024-11-26 18:15:34.997048] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:43.040 00:11:43.040 real 0m2.279s 00:11:43.040 user 0m1.951s 00:11:43.040 sys 0m0.217s 00:11:43.040 18:15:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.040 18:15:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:43.040 ************************************ 00:11:43.040 END TEST bdev_hello_world 00:11:43.040 ************************************ 00:11:43.040 18:15:36 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:11:43.040 18:15:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:43.040 18:15:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.040 18:15:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:43.040 ************************************ 00:11:43.040 START TEST bdev_bounds 00:11:43.040 ************************************ 00:11:43.040 18:15:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:11:43.040 18:15:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61430 00:11:43.040 18:15:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:43.040 18:15:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:43.040 18:15:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61430' 00:11:43.040 Process bdevio pid: 61430 00:11:43.040 18:15:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61430 00:11:43.040 18:15:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61430 ']' 00:11:43.040 18:15:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.040 18:15:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.040 18:15:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.040 18:15:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.040 18:15:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:43.040 [2024-11-26 18:15:36.348344] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:11:43.040 [2024-11-26 18:15:36.348483] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61430 ] 00:11:43.298 [2024-11-26 18:15:36.524928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:43.557 [2024-11-26 18:15:36.661588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.557 [2024-11-26 18:15:36.661763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.557 [2024-11-26 18:15:36.661791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.123 18:15:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.123 18:15:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:11:44.123 18:15:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:44.381 I/O targets: 00:11:44.381 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:44.381 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:11:44.381 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:44.381 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:44.381 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:44.381 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:44.381 00:11:44.381 00:11:44.381 CUnit - A unit testing framework for C - Version 2.1-3 00:11:44.381 http://cunit.sourceforge.net/ 00:11:44.381 00:11:44.381 00:11:44.381 Suite: bdevio tests on: Nvme3n1 00:11:44.381 Test: blockdev write read block ...passed 00:11:44.381 Test: blockdev write zeroes read block ...passed 00:11:44.381 Test: blockdev write zeroes read no split ...passed 00:11:44.381 Test: blockdev write zeroes read split ...passed 00:11:44.381 Test: blockdev write zeroes read split partial ...passed 00:11:44.381 Test: blockdev reset ...[2024-11-26 18:15:37.580439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:11:44.381 [2024-11-26 18:15:37.584517] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:11:44.381 passed 00:11:44.381 Test: blockdev write read 8 blocks ...passed 00:11:44.381 Test: blockdev write read size > 128k ...passed 00:11:44.381 Test: blockdev write read invalid size ...passed 00:11:44.381 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:44.381 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:44.381 Test: blockdev write read max offset ...passed 00:11:44.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:44.381 Test: blockdev writev readv 8 blocks ...passed 00:11:44.381 Test: blockdev writev readv 30 x 1block ...passed 00:11:44.381 Test: blockdev writev readv block ...passed 00:11:44.381 Test: blockdev writev readv size > 128k ...passed 00:11:44.381 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:44.381 Test: blockdev comparev and writev ...[2024-11-26 18:15:37.592509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b260a000 len:0x1000 00:11:44.381 [2024-11-26 18:15:37.592561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:44.381 passed 00:11:44.381 Test: blockdev nvme passthru rw ...passed 00:11:44.381 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:15:37.593168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:44.381 [2024-11-26 18:15:37.593217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:44.381 passed 00:11:44.381 Test: blockdev nvme admin passthru ...passed 00:11:44.381 Test: blockdev copy ...passed 00:11:44.381 Suite: bdevio tests on: Nvme2n3 00:11:44.381 Test: blockdev write read block ...passed 00:11:44.381 Test: blockdev write zeroes read block ...passed 00:11:44.381 Test: blockdev write zeroes read no split ...passed 00:11:44.381 Test: blockdev write zeroes read split ...passed 00:11:44.381 Test: blockdev write zeroes read split partial ...passed 00:11:44.381 Test: blockdev reset ...[2024-11-26 18:15:37.681438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:44.381 [2024-11-26 18:15:37.685908] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:44.381 passed 00:11:44.381 Test: blockdev write read 8 blocks ...passed 00:11:44.381 Test: blockdev write read size > 128k ...passed 00:11:44.381 Test: blockdev write read invalid size ...passed 00:11:44.381 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:44.381 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:44.381 Test: blockdev write read max offset ...passed 00:11:44.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:44.381 Test: blockdev writev readv 8 blocks ...passed 00:11:44.381 Test: blockdev writev readv 30 x 1block ...passed 00:11:44.381 Test: blockdev writev readv block ...passed 00:11:44.381 Test: blockdev writev readv size > 128k ...passed 00:11:44.381 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:44.381 Test: blockdev comparev and writev ...[2024-11-26 18:15:37.693522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295006000 len:0x1000 00:11:44.381 [2024-11-26 18:15:37.693567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:44.381 passed 00:11:44.381 Test: blockdev nvme passthru rw ...passed 00:11:44.381 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:15:37.694150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:44.381 [2024-11-26 18:15:37.694182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:44.381 passed 00:11:44.381 Test: blockdev nvme admin passthru ...passed 00:11:44.381 Test: blockdev copy ...passed 00:11:44.381 Suite: bdevio tests on: Nvme2n2 00:11:44.381 Test: blockdev write read block ...passed 00:11:44.381 Test: blockdev write zeroes read block ...passed 00:11:44.381 Test: blockdev write zeroes read no split ...passed 00:11:44.640 Test: blockdev write zeroes read split ...passed 00:11:44.640 Test: blockdev write zeroes read split partial ...passed 00:11:44.640 Test: blockdev reset ...[2024-11-26 18:15:37.778446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:44.640 [2024-11-26 18:15:37.782835] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:44.640 passed 00:11:44.640 Test: blockdev write read 8 blocks ...passed 00:11:44.640 Test: blockdev write read size > 128k ...passed 00:11:44.640 Test: blockdev write read invalid size ...passed 00:11:44.640 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:44.640 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:44.640 Test: blockdev write read max offset ...passed 00:11:44.640 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:44.640 Test: blockdev writev readv 8 blocks ...passed 00:11:44.640 Test: blockdev writev readv 30 x 1block ...passed 00:11:44.640 Test: blockdev writev readv block ...passed 00:11:44.640 Test: blockdev writev readv size > 128k ...passed 00:11:44.640 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:44.640 Test: blockdev comparev and writev ...[2024-11-26 18:15:37.790842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c263c000 len:0x1000 00:11:44.640 [2024-11-26 18:15:37.790889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:44.640 passed 00:11:44.640 Test: blockdev nvme passthru rw ...passed 00:11:44.640 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:15:37.791604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:44.640 [2024-11-26 18:15:37.791647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:44.640 passed 00:11:44.640 Test: blockdev nvme admin passthru ...passed 00:11:44.640 Test: blockdev copy ...passed 00:11:44.640 Suite: bdevio tests on: Nvme2n1 00:11:44.640 Test: blockdev write read block ...passed 00:11:44.640 Test: blockdev write zeroes read block ...passed 00:11:44.640 Test: blockdev write zeroes read no split ...passed 00:11:44.640 Test: blockdev write zeroes read split ...passed 00:11:44.640 Test: blockdev write zeroes read split partial ...passed 00:11:44.640 Test: blockdev reset ...[2024-11-26 18:15:37.880382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:44.640 [2024-11-26 18:15:37.884527] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:44.640 passed 00:11:44.640 Test: blockdev write read 8 blocks ...passed 00:11:44.640 Test: blockdev write read size > 128k ...passed 00:11:44.640 Test: blockdev write read invalid size ...passed 00:11:44.640 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:44.640 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:44.640 Test: blockdev write read max offset ...passed 00:11:44.640 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:44.640 Test: blockdev writev readv 8 blocks ...passed 00:11:44.640 Test: blockdev writev readv 30 x 1block ...passed 00:11:44.640 Test: blockdev writev readv block ...passed 00:11:44.640 Test: blockdev writev readv size > 128k ...passed 00:11:44.640 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:44.640 Test: blockdev comparev and writev ...[2024-11-26 18:15:37.892914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2638000 len:0x1000 00:11:44.640 [2024-11-26 18:15:37.892964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:44.640 passed 00:11:44.640 Test: blockdev nvme passthru rw ...passed 00:11:44.640 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:15:37.893626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:44.640 [2024-11-26 18:15:37.893659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:44.640 passed 00:11:44.640 Test: blockdev nvme admin passthru ...passed 00:11:44.640 Test: blockdev copy ...passed 00:11:44.640 Suite: bdevio tests on: Nvme1n1 00:11:44.640 Test: blockdev write read block ...passed 00:11:44.640 Test: blockdev write zeroes read block ...passed 00:11:44.640 Test: blockdev write zeroes read no split ...passed 00:11:44.640 Test: blockdev write zeroes read split ...passed 00:11:44.899 Test: blockdev write zeroes read split partial ...passed 00:11:44.899 Test: blockdev reset ...[2024-11-26 18:15:37.982571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:44.899 passed 00:11:44.899 Test: blockdev write read 8 blocks ...[2024-11-26 18:15:37.986376] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:11:44.899 passed 00:11:44.899 Test: blockdev write read size > 128k ...passed 00:11:44.899 Test: blockdev write read invalid size ...passed 00:11:44.899 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:44.899 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:44.899 Test: blockdev write read max offset ...passed 00:11:44.899 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:44.899 Test: blockdev writev readv 8 blocks ...passed 00:11:44.899 Test: blockdev writev readv 30 x 1block ...passed 00:11:44.899 Test: blockdev writev readv block ...passed 00:11:44.899 Test: blockdev writev readv size > 128k ...passed 00:11:44.899 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:44.899 Test: blockdev comparev and writev ...[2024-11-26 18:15:37.994359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2634000 len:0x1000 00:11:44.899 [2024-11-26 18:15:37.994405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:44.899 passed 00:11:44.899 Test: blockdev nvme passthru rw ...passed 00:11:44.899 Test: blockdev nvme passthru vendor specific ...passed 00:11:44.899 Test: blockdev nvme admin passthru ...[2024-11-26 18:15:37.995191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:44.899 [2024-11-26 18:15:37.995228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:44.899 passed 00:11:44.899 Test: blockdev copy ...passed 00:11:44.899 Suite: bdevio tests on: Nvme0n1 00:11:44.899 Test: blockdev write read block ...passed 00:11:44.899 Test: blockdev write zeroes read block ...passed 00:11:44.899 Test: blockdev write zeroes read no split ...passed 00:11:44.899 Test: blockdev write zeroes read split ...passed 00:11:44.899 Test: blockdev write zeroes read split partial ...passed 00:11:44.899 Test: blockdev reset ...[2024-11-26 18:15:38.081940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:44.899 [2024-11-26 18:15:38.085768] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:44.899 passed 00:11:44.899 Test: blockdev write read 8 blocks ...passed 00:11:44.899 Test: blockdev write read size > 128k ...passed 00:11:44.899 Test: blockdev write read invalid size ...passed 00:11:44.899 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:44.899 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:44.899 Test: blockdev write read max offset ...passed 00:11:44.899 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:44.899 Test: blockdev writev readv 8 blocks ...passed 00:11:44.899 Test: blockdev writev readv 30 x 1block ...passed 00:11:44.899 Test: blockdev writev readv block ...passed 00:11:44.899 Test: blockdev writev readv size > 128k ...passed 00:11:44.899 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:44.899 Test: blockdev comparev and writev ...passed 00:11:44.899 Test: blockdev nvme passthru rw ...[2024-11-26 18:15:38.093697] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:44.899 separate metadata which is not supported yet. 
00:11:44.899 passed 00:11:44.899 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:15:38.094130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 Ppassed 00:11:44.899 Test: blockdev nvme admin passthru ...RP2 0x0 00:11:44.899 [2024-11-26 18:15:38.094235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:44.899 passed 00:11:44.899 Test: blockdev copy ...passed 00:11:44.899 00:11:44.899 Run Summary: Type Total Ran Passed Failed Inactive 00:11:44.899 suites 6 6 n/a 0 0 00:11:44.899 tests 138 138 138 0 0 00:11:44.899 asserts 893 893 893 0 n/a 00:11:44.899 00:11:44.899 Elapsed time = 1.627 seconds 00:11:44.899 0 00:11:44.899 18:15:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61430 00:11:44.899 18:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61430 ']' 00:11:44.899 18:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61430 00:11:44.899 18:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:11:44.899 18:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.899 18:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61430 00:11:44.899 18:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.899 18:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.899 18:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61430' 00:11:44.899 killing process with pid 61430 00:11:44.899 18:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61430 00:11:44.899 18:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61430 00:11:46.275 18:15:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:46.275 00:11:46.275 real 0m3.042s 00:11:46.275 user 0m7.896s 00:11:46.275 sys 0m0.383s 00:11:46.275 18:15:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.275 18:15:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:46.275 ************************************ 00:11:46.275 END TEST bdev_bounds 00:11:46.275 ************************************ 00:11:46.275 18:15:39 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:46.275 18:15:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:46.275 18:15:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.275 18:15:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:46.275 ************************************ 00:11:46.275 START TEST bdev_nbd 00:11:46.275 ************************************ 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61501 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61501 /var/tmp/spdk-nbd.sock 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61501 ']' 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:46.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.275 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:46.275 [2024-11-26 18:15:39.458412] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:11:46.275 [2024-11-26 18:15:39.458669] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.533 [2024-11-26 18:15:39.637059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.533 [2024-11-26 18:15:39.763544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.465 1+0 records in 
00:11:47.465 1+0 records out 00:11:47.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062198 s, 6.6 MB/s 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:47.465 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:11:47.723 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.723 1+0 records in 00:11:47.723 1+0 records out 00:11:47.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740996 s, 5.5 MB/s 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:47.723 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.981 1+0 records in 00:11:47.981 1+0 records out 00:11:47.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000842044 s, 4.9 MB/s 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:47.981 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:48.238 1+0 records in 00:11:48.238 1+0 records out 00:11:48.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000790874 s, 5.2 MB/s 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.238 18:15:41 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:48.238 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:11:48.494 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:48.494 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:48.751 1+0 records in 00:11:48.751 1+0 records out 00:11:48.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755543 s, 5.4 MB/s 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:48.751 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:48.751 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:49.008 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:49.008 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:49.008 18:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.009 1+0 records in 00:11:49.009 1+0 records out 00:11:49.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700684 s, 5.8 MB/s 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:49.009 { 00:11:49.009 "nbd_device": "/dev/nbd0", 00:11:49.009 "bdev_name": "Nvme0n1" 00:11:49.009 }, 00:11:49.009 { 00:11:49.009 "nbd_device": "/dev/nbd1", 00:11:49.009 "bdev_name": "Nvme1n1" 00:11:49.009 }, 00:11:49.009 { 00:11:49.009 "nbd_device": "/dev/nbd2", 00:11:49.009 "bdev_name": "Nvme2n1" 00:11:49.009 }, 00:11:49.009 { 00:11:49.009 "nbd_device": "/dev/nbd3", 00:11:49.009 "bdev_name": "Nvme2n2" 00:11:49.009 }, 00:11:49.009 { 00:11:49.009 "nbd_device": "/dev/nbd4", 00:11:49.009 "bdev_name": "Nvme2n3" 00:11:49.009 }, 00:11:49.009 { 00:11:49.009 "nbd_device": "/dev/nbd5", 00:11:49.009 "bdev_name": "Nvme3n1" 00:11:49.009 } 00:11:49.009 ]' 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:49.009 { 00:11:49.009 "nbd_device": "/dev/nbd0", 00:11:49.009 "bdev_name": "Nvme0n1" 00:11:49.009 }, 00:11:49.009 { 00:11:49.009 "nbd_device": "/dev/nbd1", 00:11:49.009 "bdev_name": "Nvme1n1" 00:11:49.009 }, 00:11:49.009 { 00:11:49.009 "nbd_device": "/dev/nbd2", 00:11:49.009 "bdev_name": "Nvme2n1" 00:11:49.009 }, 00:11:49.009 { 00:11:49.009 "nbd_device": "/dev/nbd3", 00:11:49.009 "bdev_name": "Nvme2n2" 00:11:49.009 }, 00:11:49.009 { 00:11:49.009 "nbd_device": "/dev/nbd4", 00:11:49.009 "bdev_name": "Nvme2n3" 00:11:49.009 }, 00:11:49.009 { 00:11:49.009 "nbd_device": "/dev/nbd5", 00:11:49.009 "bdev_name": "Nvme3n1" 00:11:49.009 } 00:11:49.009 ]' 00:11:49.009 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:49.267 18:15:42 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:11:49.267 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:49.267 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:11:49.267 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:49.267 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:49.267 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.267 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:49.267 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:49.267 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:49.267 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:49.267 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.267 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.267 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:49.525 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:49.525 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.525 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.525 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:49.525 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:49.525 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:49.525 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:49.525 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.525 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.525 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:49.525 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:49.525 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.525 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.525 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:49.782 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:49.782 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:49.782 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:49.782 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.782 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.782 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:49.782 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:49.782 18:15:43 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:49.782 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.782 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:50.041 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:50.041 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:50.041 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:50.041 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.041 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.041 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:50.042 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:50.042 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.042 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.042 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:50.301 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:50.301 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:50.301 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:50.301 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.301 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.301 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:50.301 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:50.301 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.301 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.301 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:50.566 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:50.566 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:50.566 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:50.566 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.566 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.566 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:50.566 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:50.566 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.566 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:50.566 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.566 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:50.846 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:50.846 18:15:43 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:50.846 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:50.846 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:51.105 /dev/nbd0 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:51.105 
18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:51.105 1+0 records in 00:11:51.105 1+0 records out 00:11:51.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567719 s, 7.2 MB/s 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:51.105 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:11:51.364 /dev/nbd1 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:51.364 1+0 records in 00:11:51.364 1+0 records out 00:11:51.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000965931 s, 4.2 MB/s 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:51.364 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:11:51.623 /dev/nbd10 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:51.623 1+0 records in 00:11:51.623 1+0 records out 00:11:51.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671504 s, 6.1 MB/s 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:51.623 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:11:51.882 /dev/nbd11 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:51.882 18:15:45 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:51.882 1+0 records in 00:11:51.882 1+0 records out 00:11:51.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644313 s, 6.4 MB/s 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:51.882 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:11:52.166 /dev/nbd12 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.166 1+0 records in 00:11:52.166 1+0 records out 00:11:52.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000669988 s, 6.1 MB/s 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:52.166 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:11:52.426 /dev/nbd13 
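[editor's note] For readers following the xtrace: each nbd_start_disk RPC above is chased by a waitfornbd helper that polls /proc/partitions until the kernel exposes the new device, then proves it with a single 4 KiB direct read. A minimal sketch of that pattern, reconstructed from the trace (the sleep between polls is an assumption, since the trace happens to succeed on its first poll):

    waitfornbd() {
        local nbd_name=$1 i
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off between polls; not visible in the xtrace
        done
        # a single 4 KiB direct read proves the device actually answers I/O
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]   # mirrors the '[' 4096 '!=' 0 ']' check in the trace
    }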
00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.426 1+0 records in 00:11:52.426 1+0 records out 00:11:52.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595764 s, 6.9 MB/s 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:52.426 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:52.683 { 00:11:52.683 "nbd_device": "/dev/nbd0", 00:11:52.683 "bdev_name": "Nvme0n1" 00:11:52.683 }, 00:11:52.683 { 00:11:52.683 "nbd_device": "/dev/nbd1", 00:11:52.683 "bdev_name": "Nvme1n1" 00:11:52.683 }, 00:11:52.683 { 00:11:52.683 "nbd_device": "/dev/nbd10", 00:11:52.683 "bdev_name": "Nvme2n1" 00:11:52.683 }, 00:11:52.683 { 00:11:52.683 "nbd_device": "/dev/nbd11", 00:11:52.683 "bdev_name": "Nvme2n2" 00:11:52.683 }, 00:11:52.683 { 00:11:52.683 "nbd_device": "/dev/nbd12", 00:11:52.683 "bdev_name": "Nvme2n3" 00:11:52.683 }, 00:11:52.683 { 00:11:52.683 "nbd_device": "/dev/nbd13", 00:11:52.683 "bdev_name": "Nvme3n1" 00:11:52.683 } 00:11:52.683 ]' 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:52.683 { 00:11:52.683 "nbd_device": "/dev/nbd0", 00:11:52.683 "bdev_name": "Nvme0n1" 00:11:52.683 }, 00:11:52.683 { 00:11:52.683 "nbd_device": "/dev/nbd1", 00:11:52.683 "bdev_name": "Nvme1n1" 00:11:52.683 
}, 00:11:52.683 { 00:11:52.683 "nbd_device": "/dev/nbd10", 00:11:52.683 "bdev_name": "Nvme2n1" 00:11:52.683 }, 00:11:52.683 { 00:11:52.683 "nbd_device": "/dev/nbd11", 00:11:52.683 "bdev_name": "Nvme2n2" 00:11:52.683 }, 00:11:52.683 { 00:11:52.683 "nbd_device": "/dev/nbd12", 00:11:52.683 "bdev_name": "Nvme2n3" 00:11:52.683 }, 00:11:52.683 { 00:11:52.683 "nbd_device": "/dev/nbd13", 00:11:52.683 "bdev_name": "Nvme3n1" 00:11:52.683 } 00:11:52.683 ]' 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:52.683 /dev/nbd1 00:11:52.683 /dev/nbd10 00:11:52.683 /dev/nbd11 00:11:52.683 /dev/nbd12 00:11:52.683 /dev/nbd13' 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:52.683 /dev/nbd1 00:11:52.683 /dev/nbd10 00:11:52.683 /dev/nbd11 00:11:52.683 /dev/nbd12 00:11:52.683 /dev/nbd13' 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:52.683 256+0 records in 00:11:52.683 256+0 records out 00:11:52.683 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062041 s, 169 MB/s 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:52.683 18:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:52.941 256+0 records in 00:11:52.941 256+0 records out 00:11:52.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0797625 s, 13.1 MB/s 00:11:52.941 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:52.941 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:52.941 256+0 records in 00:11:52.941 256+0 records out 00:11:52.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.102216 s, 10.3 MB/s 00:11:52.941 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:52.941 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:52.941 256+0 records in 00:11:52.941 256+0 records out 00:11:52.941 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.104975 s, 10.0 MB/s 00:11:52.941 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:52.941 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:53.199 256+0 records in 00:11:53.199 256+0 records out 00:11:53.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103513 s, 10.1 MB/s 00:11:53.199 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:53.199 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:53.199 256+0 records in 00:11:53.199 256+0 records out 00:11:53.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.106331 s, 9.9 MB/s 00:11:53.199 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:53.199 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:53.457 256+0 records in 00:11:53.457 256+0 records out 00:11:53.457 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0991421 s, 10.6 MB/s 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:53.457 18:15:46 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.457 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:53.723 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:53.723 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:53.723 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:53.723 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.723 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.723 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:53.723 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:53.723 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.723 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.723 18:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:53.979 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:53.979 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:53.979 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:53.979 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.979 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.979 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:53.979 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:53.979 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.979 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.979 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:54.235 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:54.235 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:54.235 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:54.235 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.235 
18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.235 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:54.235 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:54.235 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.235 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.236 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.492 18:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:54.749 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:54.749 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:54.749 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:54.749 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.749 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.749 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:54.749 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:54.749 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.749 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:54.749 18:15:48 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:54.749 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:11:55.007 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:55.265 malloc_lvol_verify 00:11:55.265 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:55.554 aff2b2ed-ecb6-4a66-b80e-694ed2c114ba 00:11:55.554 18:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:55.825 f94c87e6-5a20-49db-b3d8-97853a408e1b 00:11:55.825 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:56.083 /dev/nbd0 00:11:56.083 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:11:56.083 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:11:56.083 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:11:56.083 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:11:56.083 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:11:56.083 mke2fs 1.47.0 (5-Feb-2023) 00:11:56.083 Discarding device blocks: 0/4096 done 00:11:56.083 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:56.083 00:11:56.083 Allocating group tables: 0/1 done 00:11:56.083 Writing inode tables: 0/1 done 00:11:56.083 Creating journal (1024 blocks): done 00:11:56.083 Writing superblocks and filesystem accounting information: 0/1 done 00:11:56.083 00:11:56.083 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
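[editor's note] Stripped of xtrace noise, the nbd_with_lvol_verify flow above is a short RPC sequence: build a malloc bdev, layer an lvolstore and an lvol on it, export the lvol over nbd, and sanity-check it with mkfs.ext4 (the teardown follows below). Condensed from the trace, using this run's socket path and sizes:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # backing malloc bdev
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvolstore on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                    # small lvol inside the store
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose it as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # surviving mkfs means I/O works
    $rpc nbd_stop_disk /dev/nbd0                           # teardown, traced below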
00:11:56.083 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:56.083 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:56.083 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:56.083 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:56.083 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:56.083 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61501 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61501 ']' 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61501 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61501 00:11:56.341 killing process with pid 61501 00:11:56.341 18:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.342 18:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.342 18:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61501' 00:11:56.342 18:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61501 00:11:56.342 18:15:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61501 00:11:57.716 ************************************ 00:11:57.716 END TEST bdev_nbd 00:11:57.716 ************************************ 00:11:57.716 18:15:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:57.716 00:11:57.716 real 0m11.505s 00:11:57.716 user 0m15.531s 00:11:57.716 sys 0m4.222s 00:11:57.716 18:15:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.716 18:15:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:57.716 18:15:50 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:11:57.716 18:15:50 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:11:57.716 skipping fio tests on NVMe due to multi-ns failures. 00:11:57.716 18:15:50 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
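[editor's note] With the nbd suite wrapped up above, its central data check is worth condensing: fill a scratch file with 1 MiB of random data, write it to every exported nbd device with direct I/O, then compare the first MiB of each device back against the file. A sketch of that round trip with the paths and sizes from the trace:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 256 x 4 KiB = 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # write pass, bypassing the page cache
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                             # read-back pass; cmp exits non-zero on mismatch
    done
    rm "$tmp"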
00:11:57.716 18:15:50 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:57.716 18:15:50 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:57.716 18:15:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:57.716 18:15:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.716 18:15:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:57.716 ************************************ 00:11:57.716 START TEST bdev_verify 00:11:57.716 ************************************ 00:11:57.716 18:15:50 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:57.716 [2024-11-26 18:15:51.013514] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:11:57.716 [2024-11-26 18:15:51.013657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61891 ] 00:11:57.974 [2024-11-26 18:15:51.195336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:58.233 [2024-11-26 18:15:51.334078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.233 [2024-11-26 18:15:51.334128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.799 Running I/O for 5 seconds... 00:12:01.112 18752.00 IOPS, 73.25 MiB/s [2024-11-26T18:15:55.384Z] 18944.00 IOPS, 74.00 MiB/s [2024-11-26T18:15:56.320Z] 18645.33 IOPS, 72.83 MiB/s [2024-11-26T18:15:57.256Z] 18928.00 IOPS, 73.94 MiB/s [2024-11-26T18:15:57.256Z] 19161.60 IOPS, 74.85 MiB/s 00:12:03.921 Latency(us) 00:12:03.921 [2024-11-26T18:15:57.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.921 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:03.921 Verification LBA range: start 0x0 length 0xbd0bd 00:12:03.921 Nvme0n1 : 5.05 1596.41 6.24 0.00 0.00 79946.84 17056.53 81505.03 00:12:03.922 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:03.922 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:03.922 Nvme0n1 : 5.04 1549.91 6.05 0.00 0.00 82316.91 15682.85 91578.69 00:12:03.922 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:03.922 Verification LBA range: start 0x0 length 0xa0000 00:12:03.922 Nvme1n1 : 5.05 1595.94 6.23 0.00 0.00 79861.49 16369.69 75552.42 00:12:03.922 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:03.922 Verification LBA range: start 0xa0000 length 0xa0000 00:12:03.922 Nvme1n1 : 5.04 1549.36 6.05 0.00 0.00 82157.16 19803.89 86541.86 00:12:03.922 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:03.922 Verification LBA range: start 0x0 length 0x80000 00:12:03.922 Nvme2n1 : 5.05 1595.44 6.23 0.00 0.00 79644.32 15339.43 73720.85 00:12:03.922 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:03.922 Verification LBA range: start 0x80000 length 0x80000 00:12:03.922 Nvme2n1 : 5.06 1555.42 6.08 0.00 0.00 81616.07 5294.39 82878.71 00:12:03.922 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:03.922 Verification LBA range: start 0x0 length 0x80000 00:12:03.922 Nvme2n2 : 5.06 1594.97 6.23 0.00 0.00 79517.51 14538.12 75094.53 00:12:03.922 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:03.922 Verification LBA range: start 0x80000 length 0x80000 00:12:03.922 Nvme2n2 : 5.08 1563.57 6.11 0.00 0.00 81192.12 11619.05 81962.93 00:12:03.922 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:03.922 Verification LBA range: start 0x0 length 0x80000 00:12:03.922 Nvme2n3 : 5.07 1603.24 6.26 0.00 0.00 79000.73 5065.45 79215.57 00:12:03.922 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:03.922 Verification LBA range: start 0x80000 length 0x80000 00:12:03.922 Nvme2n3 : 5.08 1563.16 6.11 0.00 0.00 81059.39 11161.15 88373.44 00:12:03.922 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:03.922 Verification LBA range: start 0x0 length 0x20000 00:12:03.922 Nvme3n1 : 5.08 1611.98 6.30 0.00 0.00 78521.22 9730.24 82420.82 00:12:03.922 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:03.922 Verification LBA range: start 0x20000 length 0x20000 00:12:03.922 Nvme3n1 : 5.08 1562.74 6.10 0.00 0.00 80925.66 10817.73 92952.37 00:12:03.922 [2024-11-26T18:15:57.257Z] =================================================================================================================== 00:12:03.922 [2024-11-26T18:15:57.257Z] Total : 18942.16 73.99 0.00 0.00 80462.37 5065.45 92952.37 00:12:05.842 00:12:05.842 real 0m8.109s 00:12:05.842 user 0m14.998s 00:12:05.842 sys 0m0.303s 00:12:05.842 18:15:59 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.842 18:15:59 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:05.842 ************************************ 00:12:05.842 END TEST bdev_verify 00:12:05.842 ************************************ 00:12:05.842 18:15:59 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:05.842 18:15:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:05.842 18:15:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.842 18:15:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:05.842 ************************************ 00:12:05.842 START TEST bdev_verify_big_io 00:12:05.842 ************************************ 00:12:05.842 18:15:59 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:06.099 [2024-11-26 18:15:59.192404] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
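[editor's note] As bdev_verify_big_io spins up here, note that both verify stages are the same bdevperf invocation differing only in I/O size; the full command lines are visible in the run_test calls. Reproduced standalone, with the paths of this run:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # bdev_verify: 4 KiB verify workload, queue depth 128, 5 s, cores 0x3
    $bdevperf --json $conf -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # bdev_verify_big_io: identical except for 64 KiB I/O units
    $bdevperf --json $conf -q 128 -o 65536 -w verify -t 5 -C -m 0x3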
00:12:06.099 [2024-11-26 18:15:59.192562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61999 ] 00:12:06.099 [2024-11-26 18:15:59.380135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:06.356 [2024-11-26 18:15:59.516128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.356 [2024-11-26 18:15:59.516190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.290 Running I/O for 5 seconds... 00:12:11.719 2033.00 IOPS, 127.06 MiB/s [2024-11-26T18:16:06.429Z] 2970.50 IOPS, 185.66 MiB/s [2024-11-26T18:16:06.429Z] 3230.67 IOPS, 201.92 MiB/s 00:12:13.094 Latency(us) 00:12:13.094 [2024-11-26T18:16:06.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.094 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:13.094 Verification LBA range: start 0x0 length 0xbd0b 00:12:13.094 Nvme0n1 : 5.61 155.03 9.69 0.00 0.00 806869.31 28847.29 747282.11 00:12:13.094 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:13.094 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:13.094 Nvme0n1 : 5.70 171.44 10.72 0.00 0.00 654086.41 1209.12 1545848.29 00:12:13.094 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:13.094 Verification LBA range: start 0x0 length 0xa000 00:12:13.094 Nvme1n1 : 5.61 155.70 9.73 0.00 0.00 786827.16 27702.55 703324.34 00:12:13.094 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:13.094 Verification LBA range: start 0xa000 length 0xa000 00:12:13.094 Nvme1n1 : 5.57 149.28 9.33 0.00 0.00 832733.04 34113.06 754608.41 00:12:13.094 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:13.094 Verification LBA range: start 0x0 length 0x8000 00:12:13.094 Nvme2n1 : 5.61 155.66 9.73 0.00 0.00 769805.86 29992.02 717976.93 00:12:13.094 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:13.094 Verification LBA range: start 0x8000 length 0x8000 00:12:13.094 Nvme2n1 : 5.62 154.90 9.68 0.00 0.00 791489.46 29305.18 736292.67 00:12:13.094 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:13.094 Verification LBA range: start 0x0 length 0x8000 00:12:13.094 Nvme2n2 : 5.62 159.47 9.97 0.00 0.00 738440.58 34570.96 758271.55 00:12:13.094 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:13.094 Verification LBA range: start 0x8000 length 0x8000 00:12:13.094 Nvme2n2 : 5.63 155.34 9.71 0.00 0.00 771077.90 29534.13 736292.67 00:12:13.094 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:13.094 Verification LBA range: start 0x0 length 0x8000 00:12:13.094 Nvme2n3 : 5.62 159.41 9.96 0.00 0.00 721364.11 35028.85 776587.29 00:12:13.094 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:13.094 Verification LBA range: start 0x8000 length 0x8000 00:12:13.094 Nvme2n3 : 5.63 145.37 9.09 0.00 0.00 803739.03 29763.07 1509216.81 00:12:13.094 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:13.094 Verification LBA range: start 0x0 length 0x2000 00:12:13.094 Nvme3n1 : 5.69 175.82 10.99 0.00 0.00 640182.72 1230.59 794903.03 00:12:13.094 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:12:13.094 Verification LBA range: start 0x2000 length 0x2000 00:12:13.094 Nvme3n1 : 5.63 149.93 9.37 0.00 0.00 762925.83 18086.79 1531195.70 00:12:13.094 [2024-11-26T18:16:06.429Z] =================================================================================================================== 00:12:13.094 [2024-11-26T18:16:06.429Z] Total : 1887.34 117.96 0.00 0.00 753536.76 1209.12 1545848.29 00:12:15.628 00:12:15.628 real 0m9.705s 00:12:15.628 user 0m18.135s 00:12:15.628 sys 0m0.340s 00:12:15.628 18:16:08 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.628 18:16:08 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.628 ************************************ 00:12:15.628 END TEST bdev_verify_big_io 00:12:15.628 ************************************ 00:12:15.628 18:16:08 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:15.628 18:16:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:15.628 18:16:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.628 18:16:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:15.628 ************************************ 00:12:15.628 START TEST bdev_write_zeroes 00:12:15.628 ************************************ 00:12:15.628 18:16:08 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:15.628 [2024-11-26 18:16:08.959688] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:12:15.628 [2024-11-26 18:16:08.959844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62120 ] 00:12:15.888 [2024-11-26 18:16:09.143854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.146 [2024-11-26 18:16:09.277328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.713 Running I/O for 1 seconds... 
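[editor's note] The write_zeroes stage just launched reuses the same binary and config but drops -C/-m, so bdevperf falls back to a single core (the EAL line above shows -c 0x1) and runs for one second:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1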
00:12:18.086 57984.00 IOPS, 226.50 MiB/s 00:12:18.086 Latency(us) 00:12:18.086 [2024-11-26T18:16:11.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.086 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:18.086 Nvme0n1 : 1.02 9621.84 37.59 0.00 0.00 13275.29 6553.60 26214.40 00:12:18.086 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:18.086 Nvme1n1 : 1.03 9610.71 37.54 0.00 0.00 13272.34 10646.02 23123.62 00:12:18.086 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:18.086 Nvme2n1 : 1.03 9597.87 37.49 0.00 0.00 13239.22 10474.31 23810.46 00:12:18.086 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:18.086 Nvme2n2 : 1.03 9587.81 37.45 0.00 0.00 13204.93 9215.11 23581.51 00:12:18.086 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:18.086 Nvme2n3 : 1.03 9577.70 37.41 0.00 0.00 13194.82 8871.69 23238.09 00:12:18.086 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:18.086 Nvme3n1 : 1.03 9566.93 37.37 0.00 0.00 13162.80 6668.07 23810.46 00:12:18.086 [2024-11-26T18:16:11.421Z] =================================================================================================================== 00:12:18.086 [2024-11-26T18:16:11.421Z] Total : 57562.85 224.85 0.00 0.00 13224.90 6553.60 26214.40 00:12:19.464 00:12:19.464 real 0m3.612s 00:12:19.464 user 0m3.212s 00:12:19.464 sys 0m0.281s 00:12:19.464 18:16:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.464 18:16:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:19.464 ************************************ 00:12:19.464 END TEST bdev_write_zeroes 00:12:19.464 ************************************ 00:12:19.464 18:16:12 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:19.464 18:16:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:19.464 18:16:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.464 18:16:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:19.464 ************************************ 00:12:19.464 START TEST bdev_json_nonenclosed 00:12:19.464 ************************************ 00:12:19.464 18:16:12 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:19.464 [2024-11-26 18:16:12.627257] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:12:19.464 [2024-11-26 18:16:12.627391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62179 ] 00:12:19.723 [2024-11-26 18:16:12.810767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.723 [2024-11-26 18:16:12.936768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.723 [2024-11-26 18:16:12.936885] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:19.723 [2024-11-26 18:16:12.936905] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:19.723 [2024-11-26 18:16:12.936917] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:19.983 00:12:19.983 real 0m0.701s 00:12:19.983 user 0m0.455s 00:12:19.983 sys 0m0.140s 00:12:19.983 18:16:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.983 ************************************ 00:12:19.983 END TEST bdev_json_nonenclosed 00:12:19.983 ************************************ 00:12:19.983 18:16:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:19.983 18:16:13 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:19.983 18:16:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:19.983 18:16:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.983 18:16:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:19.983 ************************************ 00:12:19.983 START TEST bdev_json_nonarray 00:12:19.983 ************************************ 00:12:19.983 18:16:13 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:20.242 [2024-11-26 18:16:13.384400] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:12:20.242 [2024-11-26 18:16:13.384586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62210 ] 00:12:20.242 [2024-11-26 18:16:13.560023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.501 [2024-11-26 18:16:13.679390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.501 [2024-11-26 18:16:13.679511] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
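[editor's note] The two negative tests around this point feed bdevperf deliberately malformed configs and expect it to stop with an error rather than start I/O. The fixture files themselves are not printed in the log; judging from the two error messages ("not enclosed in {}." and "'subsystems' should be an array."), their shapes are roughly the following illustrative reconstructions, not the repo's literal file contents:

    # nonenclosed.json (assumed shape): valid keys, but no surrounding {}
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # nonarray.json (assumed shape): subsystems present, but an object instead of an array
    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF
    # either file should make bdevperf exit via spdk_app_stop with a non-zero code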
00:12:20.502 [2024-11-26 18:16:13.679530] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:20.502 [2024-11-26 18:16:13.679540] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:20.761 00:12:20.761 real 0m0.676s 00:12:20.761 user 0m0.441s 00:12:20.761 sys 0m0.129s 00:12:20.761 18:16:13 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.761 18:16:13 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:20.761 ************************************ 00:12:20.761 END TEST bdev_json_nonarray 00:12:20.761 ************************************ 00:12:20.761 18:16:14 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:12:20.761 18:16:14 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:12:20.761 18:16:14 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:12:20.761 18:16:14 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:12:20.761 18:16:14 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:12:20.761 18:16:14 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:20.761 18:16:14 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:20.761 18:16:14 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:12:20.761 18:16:14 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:12:20.761 18:16:14 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:12:20.761 18:16:14 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:12:20.761 00:12:20.761 real 0m44.768s 00:12:20.761 user 1m7.361s 00:12:20.761 sys 0m7.072s 00:12:20.761 18:16:14 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.761 18:16:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.761 ************************************ 00:12:20.761 END TEST blockdev_nvme 00:12:20.761 ************************************ 00:12:20.761 18:16:14 -- spdk/autotest.sh@209 -- # uname -s 00:12:20.761 18:16:14 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:12:20.761 18:16:14 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:12:20.761 18:16:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:20.761 18:16:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.761 18:16:14 -- common/autotest_common.sh@10 -- # set +x 00:12:21.021 ************************************ 00:12:21.021 START TEST blockdev_nvme_gpt 00:12:21.021 ************************************ 00:12:21.021 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:12:21.021 * Looking for test storage... 
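The START TEST/END TEST banners and the real/user/sys triplets recurring throughout this log come from the run_test wrapper in common/autotest_common.sh. Its real implementation is not reproduced here; a rough sketch of the observable behavior only, with the banner text taken from this log:

    run_test() {
        # print the banner pair seen in this log, then time the wrapped command
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }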
00:12:21.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:21.021 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:21.021 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:12:21.021 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:21.021 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.021 18:16:14 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:12:21.021 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:21.021 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:21.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.021 --rc genhtml_branch_coverage=1 00:12:21.021 --rc genhtml_function_coverage=1 00:12:21.021 --rc genhtml_legend=1 00:12:21.021 --rc geninfo_all_blocks=1 00:12:21.021 --rc geninfo_unexecuted_blocks=1 00:12:21.021 00:12:21.021 ' 00:12:21.021 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:21.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.021 --rc 
genhtml_branch_coverage=1 00:12:21.021 --rc genhtml_function_coverage=1 00:12:21.021 --rc genhtml_legend=1 00:12:21.021 --rc geninfo_all_blocks=1 00:12:21.021 --rc geninfo_unexecuted_blocks=1 00:12:21.021 00:12:21.021 ' 00:12:21.021 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:21.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.021 --rc genhtml_branch_coverage=1 00:12:21.021 --rc genhtml_function_coverage=1 00:12:21.021 --rc genhtml_legend=1 00:12:21.021 --rc geninfo_all_blocks=1 00:12:21.021 --rc geninfo_unexecuted_blocks=1 00:12:21.021 00:12:21.021 ' 00:12:21.021 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:21.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.021 --rc genhtml_branch_coverage=1 00:12:21.021 --rc genhtml_function_coverage=1 00:12:21.021 --rc genhtml_legend=1 00:12:21.021 --rc geninfo_all_blocks=1 00:12:21.021 --rc geninfo_unexecuted_blocks=1 00:12:21.021 00:12:21.021 ' 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:12:21.021 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:12:21.022 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:12:21.022 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:12:21.022 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:12:21.022 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:12:21.022 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:12:21.022 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:12:21.022 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:12:21.022 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:12:21.022 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62294 00:12:21.022 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:21.022 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:21.022 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62294 00:12:21.022 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62294 ']' 00:12:21.022 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.022 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.022 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.022 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.022 18:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:21.281 [2024-11-26 18:16:14.441568] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:12:21.281 [2024-11-26 18:16:14.441715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62294 ] 00:12:21.540 [2024-11-26 18:16:14.623132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.540 [2024-11-26 18:16:14.751210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.476 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.476 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:12:22.476 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:12:22.476 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:12:22.476 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:23.070 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:23.329 Waiting for block devices as requested 00:12:23.329 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:23.329 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:23.589 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:23.589 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:28.904 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
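The loop traced above probes every /sys/block/nvme* node so zoned namespaces can be excluded before GPT testing; each check reduces to testing for a queue/zoned attribute and comparing its contents against "none". A condensed sketch of that predicate (return-code conventions here are illustrative, not the repository's exact helper):

    is_block_zoned() {
        local device=$1
        # no zoned attribute exposed: treat the device as conventional
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        # sysfs reports "none" for conventional block devices
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }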
00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:12:28.904 BYT; 00:12:28.904 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:12:28.904 BYT; 00:12:28.904 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:12:28.904 18:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:12:28.904 18:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:28.904 18:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:12:28.904 18:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:12:28.904 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:28.905 18:16:22 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:28.905 18:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:12:28.905 18:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:12:29.841 The operation has completed successfully. 00:12:29.841 18:16:23 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:12:30.771 The operation has completed successfully. 00:12:30.772 18:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:31.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:32.303 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:32.303 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:32.303 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:32.303 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:32.303 18:16:25 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:12:32.303 18:16:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.303 18:16:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:32.562 [] 00:12:32.562 18:16:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.562 18:16:25 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:12:32.562 18:16:25 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:12:32.562 18:16:25 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:12:32.562 18:16:25 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:32.562 18:16:25 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:12:32.562 18:16:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.562 18:16:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:32.823 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.823 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:12:32.823 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.823 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:32.823 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.823 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:12:32.823 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:12:32.823 18:16:26 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.823 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:32.823 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.823 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:12:32.823 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.823 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:32.823 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.823 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:32.823 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.823 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:32.823 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.823 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:12:32.823 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:12:32.823 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:12:32.823 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.823 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:33.082 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.082 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:12:33.082 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:12:33.083 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "892e2659-c728-4021-b3e7-38c137167362"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "892e2659-c728-4021-b3e7-38c137167362",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f146bef0-d1c2-440f-ad7e-4e145ea081a1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f146bef0-d1c2-440f-ad7e-4e145ea081a1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a8700301-1b49-489b-a32f-0e23d88b2b06"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a8700301-1b49-489b-a32f-0e23d88b2b06",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ec2c71bd-554a-4276-80ac-f46657dbe3a0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ec2c71bd-554a-4276-80ac-f46657dbe3a0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "67b68be5-241f-487c-b5c5-8d3cfe492a28"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "67b68be5-241f-487c-b5c5-8d3cfe492a28",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:12:33.083 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:12:33.083 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:12:33.083 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:12:33.083 18:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62294 00:12:33.083 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62294 ']' 00:12:33.083 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62294 00:12:33.083 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:12:33.083 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.083 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62294 00:12:33.083 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.083 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.083 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62294' 00:12:33.083 killing process with pid 62294 00:12:33.083 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62294 00:12:33.083 18:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62294 00:12:35.616 18:16:28 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:35.616 18:16:28 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:35.616 18:16:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:35.616 18:16:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.616 18:16:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:35.616 ************************************ 00:12:35.616 START TEST bdev_hello_world 00:12:35.616 ************************************ 00:12:35.616 18:16:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:35.876 
[2024-11-26 18:16:29.028535] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:12:35.876 [2024-11-26 18:16:29.028696] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62937 ] 00:12:35.876 [2024-11-26 18:16:29.205398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.136 [2024-11-26 18:16:29.332789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.704 [2024-11-26 18:16:30.023635] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:36.704 [2024-11-26 18:16:30.023708] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:12:36.704 [2024-11-26 18:16:30.023746] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:36.704 [2024-11-26 18:16:30.026927] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:36.704 [2024-11-26 18:16:30.027554] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:36.704 [2024-11-26 18:16:30.027589] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:36.704 [2024-11-26 18:16:30.027834] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:36.704 00:12:36.704 [2024-11-26 18:16:30.027858] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:38.083 00:12:38.083 real 0m2.279s 00:12:38.083 user 0m1.921s 00:12:38.083 sys 0m0.248s 00:12:38.083 ************************************ 00:12:38.083 END TEST bdev_hello_world 00:12:38.083 ************************************ 00:12:38.083 18:16:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.083 18:16:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:38.083 18:16:31 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:12:38.083 18:16:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:38.083 18:16:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.083 18:16:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:38.083 ************************************ 00:12:38.083 START TEST bdev_bounds 00:12:38.083 ************************************ 00:12:38.083 18:16:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:38.083 18:16:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62984 00:12:38.083 18:16:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:38.083 18:16:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:38.083 Process bdevio pid: 62984 00:12:38.083 18:16:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62984' 00:12:38.083 18:16:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62984 00:12:38.083 18:16:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62984 ']' 00:12:38.083 18:16:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.083 18:16:31 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:38.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:38.083 18:16:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:38.083 18:16:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:38.083 18:16:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:12:38.342 [2024-11-26 18:16:31.374799] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization...
00:12:38.342 [2024-11-26 18:16:31.374933] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62984 ]
00:12:38.342 [2024-11-26 18:16:31.550976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:38.600 [2024-11-26 18:16:31.678382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:38.600 [2024-11-26 18:16:31.678546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:38.600 [2024-11-26 18:16:31.678578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:39.170 18:16:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:39.170 18:16:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:12:39.170 18:16:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:12:39.170 I/O targets:
00:12:39.170 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:12:39.170 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:12:39.170 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:12:39.170 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:12:39.170 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:12:39.170 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:12:39.170 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:12:39.170
00:12:39.170
00:12:39.170 CUnit - A unit testing framework for C - Version 2.1-3
00:12:39.170 http://cunit.sourceforge.net/
00:12:39.170
00:12:39.170
00:12:39.170 Suite: bdevio tests on: Nvme3n1
00:12:39.170 Test: blockdev write read block ...passed
00:12:39.170 Test: blockdev write zeroes read block ...passed
00:12:39.170 Test: blockdev write zeroes read no split ...passed
00:12:39.430 Test: blockdev write zeroes read split ...passed
00:12:39.430 Test: blockdev write zeroes read split partial ...passed
00:12:39.430 Test: blockdev reset ...[2024-11-26 18:16:32.560929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:12:39.430 [2024-11-26 18:16:32.564823] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
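Each of the seven bdevio suites in this run follows the same shape: bdevio is started as a server against the harness-generated bdev.json, and tests.py drives the per-bdev test matrix over the RPC socket. Stripped of the harness plumbing (waitforlisten, pid bookkeeping), the pair of commands is roughly:

    # start the I/O test server against the generated config
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    # ... wait for /var/tmp/spdk.sock to come up, then drive the tests:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests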
00:12:39.430 passed 00:12:39.430 Test: blockdev write read 8 blocks ...passed 00:12:39.430 Test: blockdev write read size > 128k ...passed 00:12:39.430 Test: blockdev write read invalid size ...passed 00:12:39.430 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:39.430 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:39.430 Test: blockdev write read max offset ...passed 00:12:39.430 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:39.430 Test: blockdev writev readv 8 blocks ...passed 00:12:39.430 Test: blockdev writev readv 30 x 1block ...passed 00:12:39.430 Test: blockdev writev readv block ...passed 00:12:39.430 Test: blockdev writev readv size > 128k ...passed 00:12:39.430 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:39.430 Test: blockdev comparev and writev ...[2024-11-26 18:16:32.573095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2afe04000 len:0x1000 00:12:39.430 [2024-11-26 18:16:32.573148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:39.430 passed 00:12:39.430 Test: blockdev nvme passthru rw ...passed 00:12:39.430 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:16:32.573856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:39.430 [2024-11-26 18:16:32.573897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:39.430 passed 00:12:39.430 Test: blockdev nvme admin passthru ...passed 00:12:39.430 Test: blockdev copy ...passed 00:12:39.430 Suite: bdevio tests on: Nvme2n3 00:12:39.430 Test: blockdev write read block ...passed 00:12:39.430 Test: blockdev write zeroes read block ...passed 00:12:39.430 Test: blockdev write zeroes read no split ...passed 00:12:39.430 Test: blockdev write zeroes read split ...passed 00:12:39.430 Test: blockdev write zeroes read split partial ...passed 00:12:39.430 Test: blockdev reset ...[2024-11-26 18:16:32.660276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:39.430 [2024-11-26 18:16:32.664667] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:39.430 passed 00:12:39.430 Test: blockdev write read 8 blocks ...passed 00:12:39.430 Test: blockdev write read size > 128k ...passed 00:12:39.430 Test: blockdev write read invalid size ...passed 00:12:39.430 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:39.430 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:39.430 Test: blockdev write read max offset ...passed 00:12:39.430 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:39.430 Test: blockdev writev readv 8 blocks ...passed 00:12:39.430 Test: blockdev writev readv 30 x 1block ...passed 00:12:39.430 Test: blockdev writev readv block ...passed 00:12:39.430 Test: blockdev writev readv size > 128k ...passed 00:12:39.430 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:39.430 Test: blockdev comparev and writev ...[2024-11-26 18:16:32.672974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2afe02000 len:0x1000 00:12:39.430 [2024-11-26 18:16:32.673026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:39.430 passed 00:12:39.430 Test: blockdev nvme passthru rw ...passed 00:12:39.430 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:16:32.673738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:39.430 [2024-11-26 18:16:32.673781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:39.430 passed 00:12:39.430 Test: blockdev nvme admin passthru ...passed 00:12:39.430 Test: blockdev copy ...passed 00:12:39.430 Suite: bdevio tests on: Nvme2n2 00:12:39.430 Test: blockdev write read block ...passed 00:12:39.430 Test: blockdev write zeroes read block ...passed 00:12:39.430 Test: blockdev write zeroes read no split ...passed 00:12:39.430 Test: blockdev write zeroes read split ...passed 00:12:39.430 Test: blockdev write zeroes read split partial ...passed 00:12:39.430 Test: blockdev reset ...[2024-11-26 18:16:32.760662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:39.708 [2024-11-26 18:16:32.765246] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:39.708 passed 00:12:39.708 Test: blockdev write read 8 blocks ...passed 00:12:39.708 Test: blockdev write read size > 128k ...passed 00:12:39.708 Test: blockdev write read invalid size ...passed 00:12:39.708 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:39.708 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:39.708 Test: blockdev write read max offset ...passed 00:12:39.708 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:39.708 Test: blockdev writev readv 8 blocks ...passed 00:12:39.708 Test: blockdev writev readv 30 x 1block ...passed 00:12:39.708 Test: blockdev writev readv block ...passed 00:12:39.708 Test: blockdev writev readv size > 128k ...passed 00:12:39.708 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:39.708 Test: blockdev comparev and writev ...[2024-11-26 18:16:32.778729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3c38000 len:0x1000 00:12:39.709 [2024-11-26 18:16:32.778793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:39.709 passed 00:12:39.709 Test: blockdev nvme passthru rw ...passed 00:12:39.709 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:16:32.779477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:39.709 [2024-11-26 18:16:32.779513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:39.709 passed 00:12:39.709 Test: blockdev nvme admin passthru ...passed 00:12:39.709 Test: blockdev copy ...passed 00:12:39.709 Suite: bdevio tests on: Nvme2n1 00:12:39.709 Test: blockdev write read block ...passed 00:12:39.709 Test: blockdev write zeroes read block ...passed 00:12:39.709 Test: blockdev write zeroes read no split ...passed 00:12:39.709 Test: blockdev write zeroes read split ...passed 00:12:39.709 Test: blockdev write zeroes read split partial ...passed 00:12:39.709 Test: blockdev reset ...[2024-11-26 18:16:32.870034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:39.709 [2024-11-26 18:16:32.874628] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:39.709 passed 00:12:39.709 Test: blockdev write read 8 blocks ...passed 00:12:39.709 Test: blockdev write read size > 128k ...passed 00:12:39.709 Test: blockdev write read invalid size ...passed 00:12:39.709 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:39.709 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:39.709 Test: blockdev write read max offset ...passed 00:12:39.709 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:39.709 Test: blockdev writev readv 8 blocks ...passed 00:12:39.709 Test: blockdev writev readv 30 x 1block ...passed 00:12:39.709 Test: blockdev writev readv block ...passed 00:12:39.709 Test: blockdev writev readv size > 128k ...passed 00:12:39.709 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:39.709 Test: blockdev comparev and writev ...[2024-11-26 18:16:32.882459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3c34000 len:0x1000 00:12:39.709 [2024-11-26 18:16:32.882523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:39.709 passed 00:12:39.709 Test: blockdev nvme passthru rw ...passed 00:12:39.709 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:16:32.883139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:39.709 [2024-11-26 18:16:32.883173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:39.709 passed 00:12:39.709 Test: blockdev nvme admin passthru ...passed 00:12:39.709 Test: blockdev copy ...passed 00:12:39.709 Suite: bdevio tests on: Nvme1n1p2 00:12:39.709 Test: blockdev write read block ...passed 00:12:39.709 Test: blockdev write zeroes read block ...passed 00:12:39.709 Test: blockdev write zeroes read no split ...passed 00:12:39.709 Test: blockdev write zeroes read split ...passed 00:12:39.709 Test: blockdev write zeroes read split partial ...passed 00:12:39.709 Test: blockdev reset ...[2024-11-26 18:16:32.969390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:12:39.709 [2024-11-26 18:16:32.973367] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:12:39.709 passed 00:12:39.709 Test: blockdev write read 8 blocks ...passed 00:12:39.709 Test: blockdev write read size > 128k ...passed 00:12:39.709 Test: blockdev write read invalid size ...passed 00:12:39.709 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:39.709 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:39.709 Test: blockdev write read max offset ...passed 00:12:39.709 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:39.709 Test: blockdev writev readv 8 blocks ...passed 00:12:39.709 Test: blockdev writev readv 30 x 1block ...passed 00:12:39.709 Test: blockdev writev readv block ...passed 00:12:39.709 Test: blockdev writev readv size > 128k ...passed 00:12:39.709 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:39.709 Test: blockdev comparev and writev ...[2024-11-26 18:16:32.981558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c3c30000 len:0x1000 00:12:39.709 [2024-11-26 18:16:32.981613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:39.709 passed 00:12:39.709 Test: blockdev nvme passthru rw ...passed 00:12:39.709 Test: blockdev nvme passthru vendor specific ...passed 00:12:39.709 Test: blockdev nvme admin passthru ...passed 00:12:39.709 Test: blockdev copy ...passed 00:12:39.709 Suite: bdevio tests on: Nvme1n1p1 00:12:39.709 Test: blockdev write read block ...passed 00:12:39.709 Test: blockdev write zeroes read block ...passed 00:12:39.709 Test: blockdev write zeroes read no split ...passed 00:12:39.709 Test: blockdev write zeroes read split ...passed 00:12:39.968 Test: blockdev write zeroes read split partial ...passed 00:12:39.968 Test: blockdev reset ...[2024-11-26 18:16:33.057265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:12:39.968 [2024-11-26 18:16:33.061265] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:12:39.968 passed 00:12:39.968 Test: blockdev write read 8 blocks ...passed 00:12:39.968 Test: blockdev write read size > 128k ...passed 00:12:39.968 Test: blockdev write read invalid size ...passed 00:12:39.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:39.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:39.968 Test: blockdev write read max offset ...passed 00:12:39.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:39.968 Test: blockdev writev readv 8 blocks ...passed 00:12:39.968 Test: blockdev writev readv 30 x 1block ...passed 00:12:39.968 Test: blockdev writev readv block ...passed 00:12:39.968 Test: blockdev writev readv size > 128k ...passed 00:12:39.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:39.968 Test: blockdev comparev and writev ...[2024-11-26 18:16:33.069660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b000e000 len:0x1000 00:12:39.968 [2024-11-26 18:16:33.069713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:39.968 passed 00:12:39.968 Test: blockdev nvme passthru rw ...passed 00:12:39.968 Test: blockdev nvme passthru vendor specific ...passed 00:12:39.968 Test: blockdev nvme admin passthru ...passed 00:12:39.968 Test: blockdev copy ...passed 00:12:39.968 Suite: bdevio tests on: Nvme0n1 00:12:39.968 Test: blockdev write read block ...passed 00:12:39.968 Test: blockdev write zeroes read block ...passed 00:12:39.968 Test: blockdev write zeroes read no split ...passed 00:12:39.968 Test: blockdev write zeroes read split ...passed 00:12:39.968 Test: blockdev write zeroes read split partial ...passed 00:12:39.968 Test: blockdev reset ...[2024-11-26 18:16:33.144464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:39.968 [2024-11-26 18:16:33.148392] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:39.968 passed 00:12:39.968 Test: blockdev write read 8 blocks ...passed 00:12:39.968 Test: blockdev write read size > 128k ...passed 00:12:39.968 Test: blockdev write read invalid size ...passed 00:12:39.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:39.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:39.968 Test: blockdev write read max offset ...passed 00:12:39.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:39.968 Test: blockdev writev readv 8 blocks ...passed 00:12:39.968 Test: blockdev writev readv 30 x 1block ...passed 00:12:39.968 Test: blockdev writev readv block ...passed 00:12:39.968 Test: blockdev writev readv size > 128k ...passed 00:12:39.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:39.968 Test: blockdev comparev and writev ...[2024-11-26 18:16:33.155410] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:12:39.968 separate metadata which is not supported yet. 
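The *ERROR* skip above is informational rather than a failure: Nvme0n1 is formatted with a separate (non-interleaved) metadata buffer per block, which the comparev-and-writev helper does not handle yet, so bdevio skips just that one case. On a kernel-visible namespace the LBA format behind such a skip can be inspected with nvme-cli; a sketch with a hypothetical device path:

    # "ms:" (metadata size) per LBA format plus the extended-LBA bit in
    # FLBAS distinguish separate metadata from metadata interleaved into
    # each data block.
    nvme id-ns /dev/nvme0n1 | grep -E 'flbas|^lbaf'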
00:12:39.968 passed 00:12:39.968 Test: blockdev nvme passthru rw ...passed 00:12:39.968 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:16:33.155901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:12:39.968 [2024-11-26 18:16:33.155962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:12:39.968 passed 00:12:39.968 Test: blockdev nvme admin passthru ...passed 00:12:39.968 Test: blockdev copy ...passed 00:12:39.968 00:12:39.968 Run Summary: Type Total Ran Passed Failed Inactive 00:12:39.968 suites 7 7 n/a 0 0 00:12:39.968 tests 161 161 161 0 0 00:12:39.968 asserts 1025 1025 1025 0 n/a 00:12:39.968 00:12:39.968 Elapsed time = 1.865 seconds 00:12:39.968 0 00:12:39.968 18:16:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62984 00:12:39.968 18:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62984 ']' 00:12:39.968 18:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62984 00:12:39.968 18:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:39.968 18:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.968 18:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62984 00:12:39.968 18:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:39.968 18:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:39.968 killing process with pid 62984 00:12:39.968 18:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62984' 00:12:39.968 18:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62984 00:12:39.968 18:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62984 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:41.344 00:12:41.344 real 0m3.058s 00:12:41.344 user 0m7.937s 00:12:41.344 sys 0m0.380s 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:41.344 ************************************ 00:12:41.344 END TEST bdev_bounds 00:12:41.344 ************************************ 00:12:41.344 18:16:34 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:41.344 18:16:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:41.344 18:16:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.344 18:16:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:41.344 ************************************ 00:12:41.344 START TEST bdev_nbd 00:12:41.344 ************************************ 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:41.344 18:16:34 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63045 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63045 /var/tmp/spdk-nbd.sock 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63045 ']' 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.344 18:16:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:41.344 [2024-11-26 18:16:34.506379] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
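nbd_function_test boots the minimal bdev_svc app against the bdev JSON config on an alternate RPC socket, then blocks in waitforlisten until that socket accepts RPCs before any nbd_start_disk call is issued. A condensed sketch of that start-and-wait pattern, using the paths from the trace (the real waitforlisten helper in autotest_common.sh additionally enforces a timeout and checks that the pid is still alive):

    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json ./test/bdev/bdev.json &
    nbd_pid=$!
    # Poll the socket with a harmless RPC until the app is ready.
    until ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done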
00:12:41.344 [2024-11-26 18:16:34.506900] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.344 [2024-11-26 18:16:34.665079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.604 [2024-11-26 18:16:34.781673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:42.174 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.433 1+0 records in 00:12:42.433 1+0 records out 00:12:42.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581659 s, 7.0 MB/s 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:42.433 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.693 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:42.693 18:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:42.693 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.693 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:42.693 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:12:42.693 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:42.693 18:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.693 1+0 records in 00:12:42.693 1+0 records out 00:12:42.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454862 s, 9.0 MB/s 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:42.693 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.952 1+0 records in 00:12:42.952 1+0 records out 00:12:42.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563416 s, 7.3 MB/s 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:42.952 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:12:43.211 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:43.211 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:43.211 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:43.211 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:12:43.211 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:43.211 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:43.211 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:43.211 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:12:43.211 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:43.211 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:43.211 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:43.212 18:16:36 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.212 1+0 records in 00:12:43.212 1+0 records out 00:12:43.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651226 s, 6.3 MB/s 00:12:43.212 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.212 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:43.212 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.212 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:43.212 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:43.212 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.212 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:43.212 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.471 1+0 records in 00:12:43.471 1+0 records out 00:12:43.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000751424 s, 5.5 MB/s 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:43.471 18:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.731 1+0 records in 00:12:43.731 1+0 records out 00:12:43.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050106 s, 8.2 MB/s 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:43.731 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.991 1+0 records in 00:12:43.991 1+0 records out 00:12:43.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551578 s, 7.4 MB/s 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:12:43.991 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:44.249 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:44.249 { 00:12:44.249 "nbd_device": "/dev/nbd0", 00:12:44.249 "bdev_name": "Nvme0n1" 00:12:44.249 }, 00:12:44.249 { 00:12:44.249 "nbd_device": "/dev/nbd1", 00:12:44.249 "bdev_name": "Nvme1n1p1" 00:12:44.249 }, 00:12:44.249 { 00:12:44.249 "nbd_device": "/dev/nbd2", 00:12:44.249 "bdev_name": "Nvme1n1p2" 00:12:44.249 }, 00:12:44.249 { 00:12:44.249 "nbd_device": "/dev/nbd3", 00:12:44.249 "bdev_name": "Nvme2n1" 00:12:44.249 }, 00:12:44.249 { 00:12:44.249 "nbd_device": "/dev/nbd4", 00:12:44.249 "bdev_name": "Nvme2n2" 00:12:44.249 }, 00:12:44.249 { 00:12:44.249 "nbd_device": "/dev/nbd5", 00:12:44.249 "bdev_name": "Nvme2n3" 00:12:44.249 }, 00:12:44.249 { 00:12:44.249 "nbd_device": "/dev/nbd6", 00:12:44.249 "bdev_name": "Nvme3n1" 00:12:44.249 } 00:12:44.249 ]' 00:12:44.249 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:44.249 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:44.250 { 00:12:44.250 "nbd_device": "/dev/nbd0", 00:12:44.250 "bdev_name": "Nvme0n1" 00:12:44.250 }, 00:12:44.250 { 00:12:44.250 "nbd_device": "/dev/nbd1", 00:12:44.250 "bdev_name": "Nvme1n1p1" 00:12:44.250 }, 00:12:44.250 { 00:12:44.250 "nbd_device": "/dev/nbd2", 00:12:44.250 "bdev_name": "Nvme1n1p2" 00:12:44.250 }, 00:12:44.250 { 00:12:44.250 "nbd_device": "/dev/nbd3", 00:12:44.250 "bdev_name": "Nvme2n1" 00:12:44.250 }, 00:12:44.250 { 00:12:44.250 "nbd_device": "/dev/nbd4", 00:12:44.250 "bdev_name": "Nvme2n2" 00:12:44.250 }, 00:12:44.250 { 00:12:44.250 "nbd_device": "/dev/nbd5", 00:12:44.250 "bdev_name": "Nvme2n3" 00:12:44.250 }, 00:12:44.250 { 00:12:44.250 "nbd_device": "/dev/nbd6", 00:12:44.250 "bdev_name": "Nvme3n1" 00:12:44.250 } 00:12:44.250 ]' 00:12:44.250 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:44.250 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:12:44.250 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.250 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:12:44.250 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:44.250 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:44.250 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.250 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:44.508 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:44.509 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:44.509 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:44.509 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.509 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.509 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:44.509 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:44.509 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.509 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.509 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:44.768 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:44.768 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:44.768 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:44.768 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.768 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.768 18:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:44.768 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:44.768 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.768 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.768 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:45.028 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:45.028 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:45.028 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:45.028 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.028 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.028 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:45.028 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:45.028 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.028 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.028 18:16:38 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:45.288 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:45.288 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:45.288 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:45.288 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.288 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.288 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:45.288 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:45.288 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.288 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.288 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:45.547 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:45.547 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:45.547 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:45.547 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.547 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.547 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:45.547 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:45.547 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.547 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.547 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:45.547 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:45.807 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:45.807 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:45.807 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.807 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.807 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:45.807 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:45.807 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.807 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.807 18:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:45.807 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:45.807 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:45.807 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
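The teardown walks nbd_list in order: one nbd_stop_disk RPC per device, then waitfornbd_exit spins until the device name drops out of /proc/partitions. A compact sketch of the same loop (socket and device set as in the trace; the 20-probe cap is visible above, while the sleep interval between probes is an assumption):

    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6; do
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
        # Wait for the kernel to drop the device, up to 20 probes.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$(basename "$dev")" /proc/partitions || break
            sleep 0.1
        done
    done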
00:12:45.807 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.807 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.807 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:45.807 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:45.807 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.807 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:45.807 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.808 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:46.075 18:16:39 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:46.075 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:12:46.332 /dev/nbd0 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.332 1+0 records in 00:12:46.332 1+0 records out 00:12:46.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561725 s, 7.3 MB/s 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:46.332 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:12:46.590 /dev/nbd1 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:46.590 18:16:39 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.590 1+0 records in 00:12:46.590 1+0 records out 00:12:46.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582575 s, 7.0 MB/s 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:46.590 18:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:12:46.848 /dev/nbd10 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.848 1+0 records in 00:12:46.848 1+0 records out 00:12:46.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701307 s, 5.8 MB/s 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:46.848 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:12:47.107 /dev/nbd11 00:12:47.107 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:47.107 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:47.107 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:12:47.107 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:47.107 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:47.107 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:47.107 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:12:47.107 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:47.107 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:47.107 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:47.108 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.108 1+0 records in 00:12:47.108 1+0 records out 00:12:47.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648103 s, 6.3 MB/s 00:12:47.108 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.108 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:47.108 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.108 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:47.108 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:47.108 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.108 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:47.108 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:12:47.366 /dev/nbd12 00:12:47.366 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:47.366 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:47.366 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:12:47.366 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:47.366 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:47.366 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:47.366 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
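The repeated grep/dd/stat/rm sequences in this phase are the waitfornbd readiness probe: wait for the device name to appear in /proc/partitions, then prove the device actually serves I/O with one 4 KiB direct read and a non-zero size check on the copied file. A simplified reconstruction of the helper from the xtrace above (the temp-file path is shortened, and the sleep between probes is an assumption, since the trace never needs a retry):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # A single direct-I/O block read proves the device is usable.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }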
00:12:47.366 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:47.366 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:47.366 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:47.366 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.624 1+0 records in 00:12:47.624 1+0 records out 00:12:47.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740808 s, 5.5 MB/s 00:12:47.624 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.624 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:47.624 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.624 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:47.624 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:47.624 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.624 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:47.624 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:12:47.883 /dev/nbd13 00:12:47.883 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:47.883 18:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:47.883 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:12:47.883 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:47.883 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:47.883 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:47.883 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:12:47.883 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:47.883 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:47.883 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:47.883 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.883 1+0 records in 00:12:47.883 1+0 records out 00:12:47.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518658 s, 7.9 MB/s 00:12:47.883 18:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.883 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:47.883 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.883 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:47.883 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:47.883 18:16:41 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.883 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:47.883 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:12:48.142 /dev/nbd14 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.142 1+0 records in 00:12:48.142 1+0 records out 00:12:48.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000685316 s, 6.0 MB/s 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:48.142 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:48.400 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:48.400 { 00:12:48.401 "nbd_device": "/dev/nbd0", 00:12:48.401 "bdev_name": "Nvme0n1" 00:12:48.401 }, 00:12:48.401 { 00:12:48.401 "nbd_device": "/dev/nbd1", 00:12:48.401 "bdev_name": "Nvme1n1p1" 00:12:48.401 }, 00:12:48.401 { 00:12:48.401 "nbd_device": "/dev/nbd10", 00:12:48.401 "bdev_name": "Nvme1n1p2" 00:12:48.401 }, 00:12:48.401 { 00:12:48.401 "nbd_device": "/dev/nbd11", 00:12:48.401 "bdev_name": "Nvme2n1" 00:12:48.401 }, 00:12:48.401 { 00:12:48.401 "nbd_device": "/dev/nbd12", 00:12:48.401 "bdev_name": "Nvme2n2" 00:12:48.401 }, 00:12:48.401 { 00:12:48.401 "nbd_device": "/dev/nbd13", 00:12:48.401 "bdev_name": "Nvme2n3" 
00:12:48.401 }, 00:12:48.401 { 00:12:48.401 "nbd_device": "/dev/nbd14", 00:12:48.401 "bdev_name": "Nvme3n1" 00:12:48.401 } 00:12:48.401 ]' 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:48.401 { 00:12:48.401 "nbd_device": "/dev/nbd0", 00:12:48.401 "bdev_name": "Nvme0n1" 00:12:48.401 }, 00:12:48.401 { 00:12:48.401 "nbd_device": "/dev/nbd1", 00:12:48.401 "bdev_name": "Nvme1n1p1" 00:12:48.401 }, 00:12:48.401 { 00:12:48.401 "nbd_device": "/dev/nbd10", 00:12:48.401 "bdev_name": "Nvme1n1p2" 00:12:48.401 }, 00:12:48.401 { 00:12:48.401 "nbd_device": "/dev/nbd11", 00:12:48.401 "bdev_name": "Nvme2n1" 00:12:48.401 }, 00:12:48.401 { 00:12:48.401 "nbd_device": "/dev/nbd12", 00:12:48.401 "bdev_name": "Nvme2n2" 00:12:48.401 }, 00:12:48.401 { 00:12:48.401 "nbd_device": "/dev/nbd13", 00:12:48.401 "bdev_name": "Nvme2n3" 00:12:48.401 }, 00:12:48.401 { 00:12:48.401 "nbd_device": "/dev/nbd14", 00:12:48.401 "bdev_name": "Nvme3n1" 00:12:48.401 } 00:12:48.401 ]' 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:48.401 /dev/nbd1 00:12:48.401 /dev/nbd10 00:12:48.401 /dev/nbd11 00:12:48.401 /dev/nbd12 00:12:48.401 /dev/nbd13 00:12:48.401 /dev/nbd14' 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:48.401 /dev/nbd1 00:12:48.401 /dev/nbd10 00:12:48.401 /dev/nbd11 00:12:48.401 /dev/nbd12 00:12:48.401 /dev/nbd13 00:12:48.401 /dev/nbd14' 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:48.401 256+0 records in 00:12:48.401 256+0 records out 00:12:48.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148654 s, 70.5 MB/s 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:48.401 256+0 records in 00:12:48.401 256+0 records out 00:12:48.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0871852 s, 12.0 MB/s 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:48.401 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:48.659 256+0 records in 00:12:48.659 256+0 records out 00:12:48.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0954771 s, 11.0 MB/s 00:12:48.659 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:48.659 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:48.659 256+0 records in 00:12:48.659 256+0 records out 00:12:48.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0919255 s, 11.4 MB/s 00:12:48.659 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:48.659 18:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:48.918 256+0 records in 00:12:48.918 256+0 records out 00:12:48.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10263 s, 10.2 MB/s 00:12:48.918 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:48.918 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:48.918 256+0 records in 00:12:48.918 256+0 records out 00:12:48.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0891486 s, 11.8 MB/s 00:12:48.918 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:48.918 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:48.918 256+0 records in 00:12:48.918 256+0 records out 00:12:48.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.088806 s, 11.8 MB/s 00:12:48.918 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:48.918 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:49.177 256+0 records in 00:12:49.177 256+0 records out 00:12:49.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0833706 s, 12.6 MB/s 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.178 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:49.437 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:49.437 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:49.437 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:49.437 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.437 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.437 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:49.437 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:49.437 18:16:42 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:49.437 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.437 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:49.696 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:49.696 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:49.696 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:49.696 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.696 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.696 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:49.696 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:49.696 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.696 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.696 18:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:49.956 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:49.956 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:49.956 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:49.957 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.957 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.957 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:49.957 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:49.957 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.957 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.957 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:50.221 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:50.221 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:50.221 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:50.221 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.221 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.221 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:50.221 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:50.221 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.221 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.221 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:50.480 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:12:50.480 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:50.480 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:50.480 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.480 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.480 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:50.480 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:50.480 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.480 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.480 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:50.739 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:50.739 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:50.739 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:50.739 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.739 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.739 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:50.739 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:50.739 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.739 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.739 18:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:50.997 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:50.997 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:50.997 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:50.997 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.997 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.997 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:50.997 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:50.997 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.997 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:50.997 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.997 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:51.255 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:51.255 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:51.255 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:51.255 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:12:51.255 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:51.255 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:51.255 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:51.255 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:51.255 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:51.256 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:51.256 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:51.256 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:51.256 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:51.256 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:51.256 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:51.256 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:51.515 malloc_lvol_verify 00:12:51.515 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:51.774 376c5e3b-3b30-4fb0-929c-51f78e24b653 00:12:51.774 18:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:52.033 d5ce924b-c716-4068-aa4d-cf2d165efb44 00:12:52.033 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:52.291 /dev/nbd0 00:12:52.291 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:52.291 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:52.291 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:52.291 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:52.291 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:52.291 mke2fs 1.47.0 (5-Feb-2023) 00:12:52.291 Discarding device blocks: 0/4096 done 00:12:52.291 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:52.291 00:12:52.291 Allocating group tables: 0/1 done 00:12:52.291 Writing inode tables: 0/1 done 00:12:52.291 Creating journal (1024 blocks): done 00:12:52.291 Writing superblocks and filesystem accounting information: 0/1 done 00:12:52.291 00:12:52.291 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:52.291 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:52.291 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:52.291 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.291 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:52.291 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:12:52.291 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63045 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63045 ']' 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63045 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63045 00:12:52.549 killing process with pid 63045 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63045' 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63045 00:12:52.549 18:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63045 00:12:54.451 ************************************ 00:12:54.451 END TEST bdev_nbd 00:12:54.451 ************************************ 00:12:54.451 18:16:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:54.451 00:12:54.451 real 0m12.875s 00:12:54.451 user 0m17.688s 00:12:54.451 sys 0m4.590s 00:12:54.451 18:16:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.451 18:16:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:54.451 18:16:47 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:12:54.451 18:16:47 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:12:54.451 18:16:47 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:12:54.451 skipping fio tests on NVMe due to multi-ns failures. 00:12:54.451 18:16:47 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:12:54.451 18:16:47 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:54.451 18:16:47 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:54.451 18:16:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:54.451 18:16:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.451 18:16:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:54.451 ************************************ 00:12:54.451 START TEST bdev_verify 00:12:54.451 ************************************ 00:12:54.451 18:16:47 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:54.451 [2024-11-26 18:16:47.437325] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:12:54.451 [2024-11-26 18:16:47.437489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63479 ] 00:12:54.451 [2024-11-26 18:16:47.617817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:54.451 [2024-11-26 18:16:47.749121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.451 [2024-11-26 18:16:47.749152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.411 Running I/O for 5 seconds... 
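Stepping back to the NBD suite that just finished: its data-integrity pass reduces to a short shell pattern. A standalone sketch is below; the block size, count, and cmp flags are taken from the traces above, while the temp-file path is substituted for the repo-local nbdrandtest path used in this run.

    # Sketch of the nbd_dd_data_verify write/verify pattern traced above:
    # fill a 1 MiB file with random data, copy it to every NBD device with
    # O_DIRECT, then byte-compare each device against the file.
    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)

    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # write-phase source
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # write to each device
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                             # verify phase
    done
    rm -f "$tmp"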
00:12:57.724 17664.00 IOPS, 69.00 MiB/s [2024-11-26T18:16:51.995Z] 18208.00 IOPS, 71.12 MiB/s [2024-11-26T18:16:52.928Z] 18176.00 IOPS, 71.00 MiB/s [2024-11-26T18:16:53.875Z] 17969.25 IOPS, 70.19 MiB/s [2024-11-26T18:16:53.875Z] 16985.00 IOPS, 66.35 MiB/s 00:13:00.540 Latency(us) 00:13:00.540 [2024-11-26T18:16:53.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.540 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:00.540 Verification LBA range: start 0x0 length 0xbd0bd 00:13:00.540 Nvme0n1 : 5.07 1249.62 4.88 0.00 0.00 102126.22 20719.68 329683.28 00:13:00.540 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:00.540 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:00.540 Nvme0n1 : 5.07 1136.42 4.44 0.00 0.00 111871.47 16369.69 335178.01 00:13:00.540 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:00.540 Verification LBA range: start 0x0 length 0x4ff80 00:13:00.540 Nvme1n1p1 : 5.08 1246.88 4.87 0.00 0.00 101842.20 16369.69 320525.41 00:13:00.540 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:00.540 Verification LBA range: start 0x4ff80 length 0x4ff80 00:13:00.540 Nvme1n1p1 : 5.09 1156.89 4.52 0.00 0.00 110253.07 14595.35 320525.41 00:13:00.540 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:00.540 Verification LBA range: start 0x0 length 0x4ff7f 00:13:00.540 Nvme1n1p2 : 5.08 1246.26 4.87 0.00 0.00 101850.11 6009.85 333346.43 00:13:00.540 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:00.540 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:13:00.540 Nvme1n1p2 : 5.09 1156.51 4.52 0.00 0.00 110068.46 14480.88 318693.84 00:13:00.540 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:00.540 Verification LBA range: start 0x0 length 0x80000 00:13:00.540 Nvme2n1 : 5.09 1245.85 4.87 0.00 0.00 101847.99 8356.56 333346.43 00:13:00.540 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:00.540 Verification LBA range: start 0x80000 length 0x80000 00:13:00.540 Nvme2n1 : 5.09 1156.00 4.52 0.00 0.00 109922.47 8242.08 316862.27 00:13:00.540 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:00.540 Verification LBA range: start 0x0 length 0x80000 00:13:00.540 Nvme2n2 : 5.09 1245.44 4.87 0.00 0.00 101678.52 8184.85 335178.01 00:13:00.540 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:00.540 Verification LBA range: start 0x80000 length 0x80000 00:13:00.540 Nvme2n2 : 5.09 1143.05 4.47 0.00 0.00 110929.98 7297.68 315030.69 00:13:00.540 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:00.540 Verification LBA range: start 0x0 length 0x80000 00:13:00.540 Nvme2n3 : 5.09 1243.76 4.86 0.00 0.00 101542.39 3834.86 338841.15 00:13:00.540 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:00.540 Verification LBA range: start 0x80000 length 0x80000 00:13:00.540 Nvme2n3 : 5.10 1142.43 4.46 0.00 0.00 110743.80 16369.69 254588.76 00:13:00.540 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:00.540 Verification LBA range: start 0x0 length 0x20000 00:13:00.540 Nvme3n1 : 5.10 1242.70 4.85 0.00 0.00 101434.20 6296.03 342504.30 00:13:00.540 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:00.540 Verification LBA range: start 0x20000 length 0x20000 
00:13:00.540 Nvme3n1 : 5.10 1142.03 4.46 0.00 0.00 110324.33 16369.69 254588.76 00:13:00.540 [2024-11-26T18:16:53.875Z] =================================================================================================================== 00:13:00.540 [2024-11-26T18:16:53.875Z] Total : 16753.85 65.44 0.00 0.00 105992.97 3834.86 342504.30 00:13:02.442 00:13:02.442 real 0m8.311s 00:13:02.442 user 0m15.413s 00:13:02.442 sys 0m0.293s 00:13:02.442 18:16:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.442 18:16:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:02.442 ************************************ 00:13:02.442 END TEST bdev_verify 00:13:02.442 ************************************ 00:13:02.442 18:16:55 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:02.442 18:16:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:02.442 18:16:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.442 18:16:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:02.442 ************************************ 00:13:02.442 START TEST bdev_verify_big_io 00:13:02.442 ************************************ 00:13:02.442 18:16:55 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:02.700 [2024-11-26 18:16:55.799159] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:13:02.700 [2024-11-26 18:16:55.799289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63588 ] 00:13:02.700 [2024-11-26 18:16:55.980792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:02.959 [2024-11-26 18:16:56.107381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.959 [2024-11-26 18:16:56.107416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.893 Running I/O for 5 seconds... 
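The big-I/O verification now running differs from the 4 KiB bdev_verify pass above only in the I/O size argument; both invocations appear verbatim in the run_test traces. Side by side, which also explains why the table below reports far fewer IOPS at higher MiB/s:

    # Invocations as they appear in this log (paths from this CI workspace).
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify -t 5 -C -m 0x3  # bdev_verify, 4 KiB I/O
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3  # bdev_verify_big_io, 64 KiB I/O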
00:13:09.707 2340.00 IOPS, 146.25 MiB/s [2024-11-26T18:17:03.042Z] 3554.50 IOPS, 222.16 MiB/s [2024-11-26T18:17:03.042Z] 3804.33 IOPS, 237.77 MiB/s 00:13:09.707 Latency(us) 00:13:09.707 [2024-11-26T18:17:03.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.707 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:09.707 Verification LBA range: start 0x0 length 0xbd0b 00:13:09.707 Nvme0n1 : 5.67 129.78 8.11 0.00 0.00 942327.30 19346.00 937765.79 00:13:09.707 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:09.707 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:09.707 Nvme0n1 : 5.75 125.21 7.83 0.00 0.00 978973.97 29992.02 1399322.38 00:13:09.707 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:09.707 Verification LBA range: start 0x0 length 0x4ff8 00:13:09.707 Nvme1n1p1 : 5.67 135.35 8.46 0.00 0.00 897336.86 57923.52 798566.18 00:13:09.707 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:09.707 Verification LBA range: start 0x4ff8 length 0x4ff8 00:13:09.707 Nvme1n1p1 : 5.66 127.12 7.95 0.00 0.00 947582.99 46476.19 1428627.56 00:13:09.707 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:09.707 Verification LBA range: start 0x0 length 0x4ff7 00:13:09.707 Nvme1n1p2 : 5.75 131.36 8.21 0.00 0.00 893072.66 85626.08 1025681.33 00:13:09.707 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:09.707 Verification LBA range: start 0x4ff7 length 0x4ff7 00:13:09.707 Nvme1n1p2 : 5.76 130.46 8.15 0.00 0.00 901909.25 63189.30 1230817.59 00:13:09.707 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:09.707 Verification LBA range: start 0x0 length 0x8000 00:13:09.707 Nvme2n1 : 5.77 137.71 8.61 0.00 0.00 839940.66 71431.38 1033007.62 00:13:09.707 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:09.707 Verification LBA range: start 0x8000 length 0x8000 00:13:09.707 Nvme2n1 : 5.83 134.21 8.39 0.00 0.00 856847.59 70515.59 1472585.33 00:13:09.707 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:09.707 Verification LBA range: start 0x0 length 0x8000 00:13:09.707 Nvme2n2 : 5.77 143.89 8.99 0.00 0.00 789841.17 14194.70 1047660.21 00:13:09.707 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:09.707 Verification LBA range: start 0x8000 length 0x8000 00:13:09.707 Nvme2n2 : 5.89 145.69 9.11 0.00 0.00 770143.23 26901.24 824208.21 00:13:09.707 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:09.707 Verification LBA range: start 0x0 length 0x8000 00:13:09.707 Nvme2n3 : 5.80 148.98 9.31 0.00 0.00 745644.57 28389.39 1062312.80 00:13:09.707 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:09.707 Verification LBA range: start 0x8000 length 0x8000 00:13:09.707 Nvme2n3 : 5.89 143.26 8.95 0.00 0.00 766634.55 19689.42 1523869.40 00:13:09.707 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:09.707 Verification LBA range: start 0x0 length 0x2000 00:13:09.707 Nvme3n1 : 5.90 169.24 10.58 0.00 0.00 643031.32 3176.64 1076965.39 00:13:09.707 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:09.707 Verification LBA range: start 0x2000 length 0x2000 00:13:09.707 Nvme3n1 : 5.93 167.38 10.46 0.00 0.00 642678.44 4922.35 864502.83 00:13:09.707 
[2024-11-26T18:17:03.042Z] =================================================================================================================== 00:13:09.707 [2024-11-26T18:17:03.042Z] Total : 1969.64 123.10 0.00 0.00 818905.34 3176.64 1523869.40 00:13:13.013 00:13:13.013 real 0m10.364s 00:13:13.013 user 0m19.505s 00:13:13.013 sys 0m0.351s 00:13:13.013 18:17:06 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.013 ************************************ 00:13:13.013 END TEST bdev_verify_big_io 00:13:13.013 ************************************ 00:13:13.013 18:17:06 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.013 18:17:06 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:13.013 18:17:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:13.013 18:17:06 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.013 18:17:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:13.013 ************************************ 00:13:13.013 START TEST bdev_write_zeroes 00:13:13.013 ************************************ 00:13:13.013 18:17:06 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:13.013 [2024-11-26 18:17:06.208877] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:13:13.013 [2024-11-26 18:17:06.208996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63720 ] 00:13:13.273 [2024-11-26 18:17:06.385933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.273 [2024-11-26 18:17:06.504262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.209 Running I/O for 1 seconds... 
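A note on reading these result tables: the MiB/s column is simply IOPS multiplied by the I/O size. For the 4096-byte write_zeroes results that follow, the first progress line checks out as

\[
\text{MiB/s} \;=\; \frac{\text{IOPS} \times \text{I/O size}}{2^{20}} \;=\; \frac{48711 \times 4096}{1048576} \;\approx\; 190.28
\]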
00:13:15.142 48711.00 IOPS, 190.28 MiB/s 00:13:15.142 Latency(us) 00:13:15.142 [2024-11-26T18:17:08.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.142 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:15.142 Nvme0n1 : 1.03 6789.47 26.52 0.00 0.00 18808.32 12706.54 187736.31 00:13:15.142 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:15.142 Nvme1n1p1 : 1.03 6961.80 27.19 0.00 0.00 18315.60 12821.02 127294.38 00:13:15.142 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:15.142 Nvme1n1p2 : 1.03 6953.49 27.16 0.00 0.00 18306.55 12821.02 108062.85 00:13:15.142 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:15.142 Nvme2n1 : 1.03 6945.82 27.13 0.00 0.00 18303.31 12763.78 107147.07 00:13:15.142 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:15.142 Nvme2n2 : 1.03 6939.04 27.11 0.00 0.00 18300.43 12992.73 108062.85 00:13:15.143 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:15.143 Nvme2n3 : 1.03 6931.74 27.08 0.00 0.00 18296.12 11447.34 111726.00 00:13:15.143 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:15.143 Nvme3n1 : 1.04 6985.67 27.29 0.00 0.00 18130.89 10130.89 102110.24 00:13:15.143 [2024-11-26T18:17:08.478Z] =================================================================================================================== 00:13:15.143 [2024-11-26T18:17:08.478Z] Total : 48507.04 189.48 0.00 0.00 18349.64 10130.89 187736.31 00:13:16.517 00:13:16.517 real 0m3.466s 00:13:16.517 user 0m3.124s 00:13:16.517 sys 0m0.225s 00:13:16.517 18:17:09 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.517 18:17:09 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:16.517 ************************************ 00:13:16.517 END TEST bdev_write_zeroes 00:13:16.517 ************************************ 00:13:16.517 18:17:09 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:16.517 18:17:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:16.517 18:17:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.517 18:17:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:16.517 ************************************ 00:13:16.517 START TEST bdev_json_nonenclosed 00:13:16.517 ************************************ 00:13:16.517 18:17:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:16.517 [2024-11-26 18:17:09.711497] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:13:16.517 [2024-11-26 18:17:09.712044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63773 ] 00:13:16.776 [2024-11-26 18:17:09.890199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.776 [2024-11-26 18:17:10.024456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.776 [2024-11-26 18:17:10.024554] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:16.776 [2024-11-26 18:17:10.024575] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:16.776 [2024-11-26 18:17:10.024587] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:17.034 00:13:17.034 real 0m0.705s 00:13:17.034 user 0m0.449s 00:13:17.034 sys 0m0.148s 00:13:17.034 18:17:10 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.034 18:17:10 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:17.034 ************************************ 00:13:17.034 END TEST bdev_json_nonenclosed 00:13:17.034 ************************************ 00:13:17.034 18:17:10 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:17.034 18:17:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:17.034 18:17:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.034 18:17:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:17.291 ************************************ 00:13:17.291 START TEST bdev_json_nonarray 00:13:17.291 ************************************ 00:13:17.291 18:17:10 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:17.291 [2024-11-26 18:17:10.468566] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:13:17.291 [2024-11-26 18:17:10.468739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63804 ] 00:13:17.549 [2024-11-26 18:17:10.641032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.549 [2024-11-26 18:17:10.786735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.549 [2024-11-26 18:17:10.786864] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
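These two failures are the point of the nonenclosed and nonarray tests: json_config_prepare_ctx rejects a configuration that is not a JSON object ("not enclosed in {}") and one whose 'subsystems' key is not an array. The exact contents of nonenclosed.json and nonarray.json are not shown in this log; for reference, a minimal shape that satisfies both checks would look like the following sketch (the empty subsystems array is an assumption for illustration).

    # Hypothetical minimal config passing both checks above: a JSON object
    # whose "subsystems" key is an array.
    cat > /tmp/minimal.json <<'EOF'
    {
      "subsystems": []
    }
    EOF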
00:13:17.549 [2024-11-26 18:17:10.786894] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:17.549 [2024-11-26 18:17:10.786911] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:17.807 00:13:17.807 real 0m0.684s 00:13:17.807 user 0m0.446s 00:13:17.807 sys 0m0.132s 00:13:17.807 18:17:11 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.807 ************************************ 00:13:17.807 END TEST bdev_json_nonarray 00:13:17.807 ************************************ 00:13:17.807 18:17:11 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:17.807 18:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:13:17.807 18:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:13:17.807 18:17:11 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:13:17.807 18:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:17.807 18:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.808 18:17:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:17.808 ************************************ 00:13:17.808 START TEST bdev_gpt_uuid 00:13:17.808 ************************************ 00:13:17.808 18:17:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:13:17.808 18:17:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:13:17.808 18:17:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:13:17.808 18:17:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63835 00:13:17.808 18:17:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:17.808 18:17:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63835 00:13:17.808 18:17:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63835 ']' 00:13:17.808 18:17:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.808 18:17:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.808 18:17:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.808 18:17:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:17.808 18:17:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.808 18:17:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:18.065 [2024-11-26 18:17:11.236532] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
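The bdev_gpt_uuid test being started here amounts to: launch spdk_tgt, load bdev.json, then fetch each GPT partition bdev and compare its GUIDs against expected values. A condensed form of the first check traced below; the RPC script path, UUID, and jq filter are as they appear in this log, and the default /var/tmp/spdk.sock RPC socket is assumed.

    # Condensed sketch of the partition-GUID check performed below.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    expected=6f89f330-603b-4116-ac73-2ca8eae53030

    bdev=$("$RPC" bdev_get_bdevs -b "$expected")
    guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev")
    [[ $guid == "$expected" ]] && echo "SPDK_TEST_first GUID matches"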
00:13:18.065 [2024-11-26 18:17:11.236714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63835 ] 00:13:18.321 [2024-11-26 18:17:11.414462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.321 [2024-11-26 18:17:11.532891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.292 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.292 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:13:19.292 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:19.292 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.292 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:19.601 Some configs were skipped because the RPC state that can call them passed over. 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:13:19.601 { 00:13:19.601 "name": "Nvme1n1p1", 00:13:19.601 "aliases": [ 00:13:19.601 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:13:19.601 ], 00:13:19.601 "product_name": "GPT Disk", 00:13:19.601 "block_size": 4096, 00:13:19.601 "num_blocks": 655104, 00:13:19.601 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:13:19.601 "assigned_rate_limits": { 00:13:19.601 "rw_ios_per_sec": 0, 00:13:19.601 "rw_mbytes_per_sec": 0, 00:13:19.601 "r_mbytes_per_sec": 0, 00:13:19.601 "w_mbytes_per_sec": 0 00:13:19.601 }, 00:13:19.601 "claimed": false, 00:13:19.601 "zoned": false, 00:13:19.601 "supported_io_types": { 00:13:19.601 "read": true, 00:13:19.601 "write": true, 00:13:19.601 "unmap": true, 00:13:19.601 "flush": true, 00:13:19.601 "reset": true, 00:13:19.601 "nvme_admin": false, 00:13:19.601 "nvme_io": false, 00:13:19.601 "nvme_io_md": false, 00:13:19.601 "write_zeroes": true, 00:13:19.601 "zcopy": false, 00:13:19.601 "get_zone_info": false, 00:13:19.601 "zone_management": false, 00:13:19.601 "zone_append": false, 00:13:19.601 "compare": true, 00:13:19.601 "compare_and_write": false, 00:13:19.601 "abort": true, 00:13:19.601 "seek_hole": false, 00:13:19.601 "seek_data": false, 00:13:19.601 "copy": true, 00:13:19.601 "nvme_iov_md": false 00:13:19.601 }, 00:13:19.601 "driver_specific": { 
00:13:19.601 "gpt": { 00:13:19.601 "base_bdev": "Nvme1n1", 00:13:19.601 "offset_blocks": 256, 00:13:19.601 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:13:19.601 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:13:19.601 "partition_name": "SPDK_TEST_first" 00:13:19.601 } 00:13:19.601 } 00:13:19.601 } 00:13:19.601 ]' 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.601 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:19.859 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.859 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:13:19.859 { 00:13:19.859 "name": "Nvme1n1p2", 00:13:19.859 "aliases": [ 00:13:19.859 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:13:19.859 ], 00:13:19.859 "product_name": "GPT Disk", 00:13:19.859 "block_size": 4096, 00:13:19.859 "num_blocks": 655103, 00:13:19.859 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:13:19.859 "assigned_rate_limits": { 00:13:19.859 "rw_ios_per_sec": 0, 00:13:19.859 "rw_mbytes_per_sec": 0, 00:13:19.859 "r_mbytes_per_sec": 0, 00:13:19.859 "w_mbytes_per_sec": 0 00:13:19.859 }, 00:13:19.859 "claimed": false, 00:13:19.859 "zoned": false, 00:13:19.859 "supported_io_types": { 00:13:19.859 "read": true, 00:13:19.859 "write": true, 00:13:19.859 "unmap": true, 00:13:19.859 "flush": true, 00:13:19.859 "reset": true, 00:13:19.859 "nvme_admin": false, 00:13:19.859 "nvme_io": false, 00:13:19.859 "nvme_io_md": false, 00:13:19.859 "write_zeroes": true, 00:13:19.859 "zcopy": false, 00:13:19.859 "get_zone_info": false, 00:13:19.859 "zone_management": false, 00:13:19.859 "zone_append": false, 00:13:19.859 "compare": true, 00:13:19.859 "compare_and_write": false, 00:13:19.859 "abort": true, 00:13:19.859 "seek_hole": false, 00:13:19.859 "seek_data": false, 00:13:19.859 "copy": true, 00:13:19.859 "nvme_iov_md": false 00:13:19.859 }, 00:13:19.859 "driver_specific": { 00:13:19.859 "gpt": { 00:13:19.859 "base_bdev": "Nvme1n1", 00:13:19.859 "offset_blocks": 655360, 00:13:19.859 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:13:19.859 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:13:19.859 "partition_name": "SPDK_TEST_second" 00:13:19.859 } 00:13:19.859 } 00:13:19.859 } 00:13:19.859 ]' 00:13:19.859 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:13:19.859 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:13:19.859 18:17:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:13:19.859 18:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:13:19.859 18:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:13:19.859 18:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:13:19.859 18:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63835 00:13:19.859 18:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63835 ']' 00:13:19.859 18:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63835 00:13:19.859 18:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:13:19.859 18:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.859 18:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63835 00:13:19.859 18:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.859 18:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.859 killing process with pid 63835 00:13:19.859 18:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63835' 00:13:19.859 18:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63835 00:13:19.859 18:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63835 00:13:22.389 00:13:22.389 real 0m4.483s 00:13:22.389 user 0m4.583s 00:13:22.389 sys 0m0.555s 00:13:22.389 18:17:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.389 18:17:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:22.389 ************************************ 00:13:22.389 END TEST bdev_gpt_uuid 00:13:22.389 ************************************ 00:13:22.389 18:17:15 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:13:22.389 18:17:15 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:13:22.389 18:17:15 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:13:22.389 18:17:15 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:22.389 18:17:15 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:22.389 18:17:15 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:13:22.389 18:17:15 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:13:22.389 18:17:15 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:13:22.389 18:17:15 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:22.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:23.213 Waiting for block devices as requested 00:13:23.213 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:23.213 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:13:23.470 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:23.470 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:28.737 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:28.737 18:17:21 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:13:28.737 18:17:21 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:13:28.995 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:13:28.995 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:13:28.995 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:13:28.995 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:13:28.995 18:17:22 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:13:28.995 00:13:28.995 real 1m7.994s 00:13:28.995 user 1m27.296s 00:13:28.995 sys 0m11.062s 00:13:28.995 ************************************ 00:13:28.995 END TEST blockdev_nvme_gpt 00:13:28.995 ************************************ 00:13:28.995 18:17:22 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.995 18:17:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:28.995 18:17:22 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:13:28.995 18:17:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:28.995 18:17:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.995 18:17:22 -- common/autotest_common.sh@10 -- # set +x 00:13:28.995 ************************************ 00:13:28.995 START TEST nvme 00:13:28.995 ************************************ 00:13:28.995 18:17:22 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:13:28.995 * Looking for test storage... 00:13:28.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:28.995 18:17:22 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:28.995 18:17:22 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:28.995 18:17:22 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:29.254 18:17:22 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:29.254 18:17:22 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.254 18:17:22 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.254 18:17:22 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.254 18:17:22 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.254 18:17:22 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.254 18:17:22 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.254 18:17:22 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.254 18:17:22 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.254 18:17:22 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.254 18:17:22 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.254 18:17:22 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.254 18:17:22 nvme -- scripts/common.sh@344 -- # case "$op" in 00:13:29.254 18:17:22 nvme -- scripts/common.sh@345 -- # : 1 00:13:29.254 18:17:22 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.254 18:17:22 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.254 18:17:22 nvme -- scripts/common.sh@365 -- # decimal 1 00:13:29.254 18:17:22 nvme -- scripts/common.sh@353 -- # local d=1 00:13:29.254 18:17:22 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.254 18:17:22 nvme -- scripts/common.sh@355 -- # echo 1 00:13:29.254 18:17:22 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.254 18:17:22 nvme -- scripts/common.sh@366 -- # decimal 2 00:13:29.254 18:17:22 nvme -- scripts/common.sh@353 -- # local d=2 00:13:29.254 18:17:22 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.254 18:17:22 nvme -- scripts/common.sh@355 -- # echo 2 00:13:29.254 18:17:22 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.254 18:17:22 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.254 18:17:22 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.254 18:17:22 nvme -- scripts/common.sh@368 -- # return 0 00:13:29.254 18:17:22 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.254 18:17:22 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:29.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.254 --rc genhtml_branch_coverage=1 00:13:29.254 --rc genhtml_function_coverage=1 00:13:29.254 --rc genhtml_legend=1 00:13:29.254 --rc geninfo_all_blocks=1 00:13:29.254 --rc geninfo_unexecuted_blocks=1 00:13:29.254 00:13:29.254 ' 00:13:29.254 18:17:22 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:29.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.254 --rc genhtml_branch_coverage=1 00:13:29.254 --rc genhtml_function_coverage=1 00:13:29.254 --rc genhtml_legend=1 00:13:29.254 --rc geninfo_all_blocks=1 00:13:29.254 --rc geninfo_unexecuted_blocks=1 00:13:29.254 00:13:29.254 ' 00:13:29.254 18:17:22 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:29.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.254 --rc genhtml_branch_coverage=1 00:13:29.254 --rc genhtml_function_coverage=1 00:13:29.254 --rc genhtml_legend=1 00:13:29.254 --rc geninfo_all_blocks=1 00:13:29.254 --rc geninfo_unexecuted_blocks=1 00:13:29.254 00:13:29.254 ' 00:13:29.254 18:17:22 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:29.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.254 --rc genhtml_branch_coverage=1 00:13:29.254 --rc genhtml_function_coverage=1 00:13:29.254 --rc genhtml_legend=1 00:13:29.254 --rc geninfo_all_blocks=1 00:13:29.254 --rc geninfo_unexecuted_blocks=1 00:13:29.254 00:13:29.254 ' 00:13:29.254 18:17:22 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:29.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:30.757 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:30.757 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:30.757 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:30.757 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:30.757 18:17:23 nvme -- nvme/nvme.sh@79 -- # uname 00:13:30.757 18:17:23 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:13:30.757 18:17:23 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:13:30.757 18:17:23 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:13:30.757 18:17:23 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:13:30.757 18:17:23 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:13:30.757 18:17:23 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:13:30.757 18:17:23 nvme -- common/autotest_common.sh@1075 -- # stubpid=64492 00:13:30.757 18:17:23 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:13:30.757 18:17:23 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:13:30.757 Waiting for stub to ready for secondary processes... 00:13:30.757 18:17:23 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:30.757 18:17:23 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64492 ]] 00:13:30.757 18:17:23 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:13:30.757 [2024-11-26 18:17:23.975050] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:13:30.757 [2024-11-26 18:17:23.975205] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:13:31.693 18:17:24 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:31.693 18:17:24 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64492 ]] 00:13:31.693 18:17:24 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:13:31.693 [2024-11-26 18:17:24.978084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:31.951 [2024-11-26 18:17:25.089347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.951 [2024-11-26 18:17:25.089477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.951 [2024-11-26 18:17:25.089513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.951 [2024-11-26 18:17:25.106440] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:13:31.951 [2024-11-26 18:17:25.106474] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:31.951 [2024-11-26 18:17:25.122985] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:13:31.951 [2024-11-26 18:17:25.123134] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:13:31.951 [2024-11-26 18:17:25.130642] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:31.951 [2024-11-26 18:17:25.130894] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:13:31.951 [2024-11-26 18:17:25.130986] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:13:31.951 [2024-11-26 18:17:25.134003] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:31.951 [2024-11-26 18:17:25.134154] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:13:31.951 [2024-11-26 18:17:25.134212] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:13:31.951 [2024-11-26 18:17:25.136770] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:31.951 [2024-11-26 18:17:25.136935] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:13:31.951 [2024-11-26 18:17:25.137000] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:13:31.951 [2024-11-26 18:17:25.137065] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:13:31.951 [2024-11-26 18:17:25.137103] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:13:32.885 18:17:25 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:32.885 done. 00:13:32.886 18:17:25 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:13:32.886 18:17:25 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:13:32.886 18:17:25 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:13:32.886 18:17:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.886 18:17:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.886 ************************************ 00:13:32.886 START TEST nvme_reset 00:13:32.886 ************************************ 00:13:32.886 18:17:25 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:13:32.886 Initializing NVMe Controllers 00:13:32.886 Skipping QEMU NVMe SSD at 0000:00:10.0 00:13:32.886 Skipping QEMU NVMe SSD at 0000:00:11.0 00:13:32.886 Skipping QEMU NVMe SSD at 0000:00:13.0 00:13:32.886 Skipping QEMU NVMe SSD at 0000:00:12.0 00:13:32.886 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:13:32.886 00:13:32.886 real 0m0.264s 00:13:32.886 user 0m0.100s 00:13:32.886 sys 0m0.116s 00:13:32.886 18:17:26 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.886 18:17:26 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:13:32.886 ************************************ 00:13:32.886 END TEST nvme_reset 00:13:32.886 ************************************ 00:13:33.144 18:17:26 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:13:33.144 18:17:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:33.144 18:17:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.144 18:17:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.144 ************************************ 00:13:33.144 START TEST nvme_identify 00:13:33.144 ************************************ 00:13:33.144 18:17:26 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:13:33.144 18:17:26 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:13:33.144 18:17:26 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:13:33.144 18:17:26 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:13:33.144 18:17:26 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:13:33.144 18:17:26 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:33.144 18:17:26 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:13:33.144 18:17:26 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:33.144 18:17:26 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:33.144 18:17:26 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:33.144 18:17:26 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:33.144 18:17:26 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:33.144 18:17:26 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:13:33.405 [2024-11-26 18:17:26.601457] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64526 terminated unexpected 00:13:33.405 ===================================================== 00:13:33.405 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:33.405 ===================================================== 00:13:33.405 Controller Capabilities/Features 00:13:33.405 ================================ 00:13:33.405 Vendor ID: 1b36 00:13:33.405 Subsystem Vendor ID: 1af4 00:13:33.405 Serial Number: 12340 00:13:33.405 Model Number: QEMU NVMe Ctrl 00:13:33.405 Firmware Version: 8.0.0 00:13:33.405 Recommended Arb Burst: 6 00:13:33.405 IEEE OUI Identifier: 00 54 52 00:13:33.405 Multi-path I/O 00:13:33.405 May have multiple subsystem ports: No 00:13:33.405 May have multiple controllers: No 00:13:33.405 Associated with SR-IOV VF: No 00:13:33.405 Max Data Transfer Size: 524288 00:13:33.405 Max Number of Namespaces: 256 00:13:33.405 Max Number of I/O Queues: 64 00:13:33.405 NVMe Specification Version (VS): 1.4 00:13:33.405 NVMe Specification Version (Identify): 1.4 00:13:33.405 Maximum Queue Entries: 2048 00:13:33.405 Contiguous Queues Required: Yes 00:13:33.405 Arbitration Mechanisms Supported 00:13:33.405 Weighted Round Robin: Not Supported 00:13:33.405 Vendor Specific: Not Supported 00:13:33.405 Reset Timeout: 7500 ms 00:13:33.405 Doorbell Stride: 4 bytes 00:13:33.405 NVM Subsystem Reset: Not Supported 00:13:33.405 Command Sets Supported 00:13:33.405 NVM Command Set: Supported 00:13:33.405 Boot Partition: Not Supported 00:13:33.405 Memory Page Size Minimum: 4096 bytes 00:13:33.405 Memory Page Size Maximum: 65536 bytes 00:13:33.405 Persistent Memory Region: Not Supported 00:13:33.405 Optional Asynchronous Events Supported 00:13:33.405 Namespace Attribute Notices: Supported 00:13:33.405 Firmware Activation Notices: Not Supported 00:13:33.405 ANA Change Notices: Not Supported 00:13:33.405 PLE Aggregate Log Change Notices: Not Supported 00:13:33.405 LBA Status Info Alert Notices: Not Supported 00:13:33.405 EGE Aggregate Log Change Notices: Not Supported 00:13:33.405 Normal NVM Subsystem Shutdown event: Not Supported 00:13:33.405 Zone Descriptor Change Notices: Not Supported 00:13:33.405 Discovery Log Change Notices: Not Supported 00:13:33.405 Controller Attributes 00:13:33.405 128-bit Host Identifier: Not Supported 00:13:33.405 Non-Operational Permissive Mode: Not Supported 00:13:33.405 NVM Sets: Not Supported 00:13:33.405 Read Recovery Levels: Not Supported 00:13:33.405 Endurance Groups: Not Supported 00:13:33.405 Predictable Latency Mode: Not Supported 00:13:33.405 Traffic Based Keep ALive: Not Supported 00:13:33.405 Namespace Granularity: Not Supported 00:13:33.405 SQ Associations: Not Supported 00:13:33.405 UUID List: Not Supported 00:13:33.405 Multi-Domain Subsystem: Not Supported 00:13:33.405 Fixed Capacity Management: Not Supported 00:13:33.405 Variable Capacity Management: Not Supported 00:13:33.405 Delete Endurance Group: Not Supported 00:13:33.405 Delete NVM Set: Not Supported 00:13:33.405 Extended LBA Formats Supported: Supported 00:13:33.405 Flexible Data Placement Supported: Not Supported 00:13:33.405 00:13:33.405 Controller Memory Buffer Support 00:13:33.405 ================================ 00:13:33.405 Supported: No 
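A note for replaying this step outside CI: the BDF list that nvme_identify iterates over comes straight from scripts/gen_nvme.sh piped through jq, exactly as the xtrace above shows. A minimal standalone sketch, assuming the same checkout path as this job:

  rootdir=/home/vagrant/spdk_repo/spdk
  # gen_nvme.sh prints a JSON config with one attach entry per local NVMe
  # controller; jq pulls out each controller's PCI address (traddr)
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # this run printed the four QEMU BDFs listed above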
00:13:33.405 00:13:33.405 Persistent Memory Region Support 00:13:33.405 ================================ 00:13:33.405 Supported: No 00:13:33.405 00:13:33.405 Admin Command Set Attributes 00:13:33.405 ============================ 00:13:33.405 Security Send/Receive: Not Supported 00:13:33.406 Format NVM: Supported 00:13:33.406 Firmware Activate/Download: Not Supported 00:13:33.406 Namespace Management: Supported 00:13:33.406 Device Self-Test: Not Supported 00:13:33.406 Directives: Supported 00:13:33.406 NVMe-MI: Not Supported 00:13:33.406 Virtualization Management: Not Supported 00:13:33.406 Doorbell Buffer Config: Supported 00:13:33.406 Get LBA Status Capability: Not Supported 00:13:33.406 Command & Feature Lockdown Capability: Not Supported 00:13:33.406 Abort Command Limit: 4 00:13:33.406 Async Event Request Limit: 4 00:13:33.406 Number of Firmware Slots: N/A 00:13:33.406 Firmware Slot 1 Read-Only: N/A 00:13:33.406 Firmware Activation Without Reset: N/A 00:13:33.406 Multiple Update Detection Support: N/A 00:13:33.406 Firmware Update Granularity: No Information Provided 00:13:33.406 Per-Namespace SMART Log: Yes 00:13:33.406 Asymmetric Namespace Access Log Page: Not Supported 00:13:33.406 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:13:33.406 Command Effects Log Page: Supported 00:13:33.406 Get Log Page Extended Data: Supported 00:13:33.406 Telemetry Log Pages: Not Supported 00:13:33.406 Persistent Event Log Pages: Not Supported 00:13:33.406 Supported Log Pages Log Page: May Support 00:13:33.406 Commands Supported & Effects Log Page: Not Supported 00:13:33.406 Feature Identifiers & Effects Log Page:May Support 00:13:33.406 NVMe-MI Commands & Effects Log Page: May Support 00:13:33.406 Data Area 4 for Telemetry Log: Not Supported 00:13:33.406 Error Log Page Entries Supported: 1 00:13:33.406 Keep Alive: Not Supported 00:13:33.406 00:13:33.406 NVM Command Set Attributes 00:13:33.406 ========================== 00:13:33.406 Submission Queue Entry Size 00:13:33.406 Max: 64 00:13:33.406 Min: 64 00:13:33.406 Completion Queue Entry Size 00:13:33.406 Max: 16 00:13:33.406 Min: 16 00:13:33.406 Number of Namespaces: 256 00:13:33.406 Compare Command: Supported 00:13:33.406 Write Uncorrectable Command: Not Supported 00:13:33.406 Dataset Management Command: Supported 00:13:33.406 Write Zeroes Command: Supported 00:13:33.406 Set Features Save Field: Supported 00:13:33.406 Reservations: Not Supported 00:13:33.406 Timestamp: Supported 00:13:33.406 Copy: Supported 00:13:33.406 Volatile Write Cache: Present 00:13:33.406 Atomic Write Unit (Normal): 1 00:13:33.406 Atomic Write Unit (PFail): 1 00:13:33.406 Atomic Compare & Write Unit: 1 00:13:33.406 Fused Compare & Write: Not Supported 00:13:33.406 Scatter-Gather List 00:13:33.406 SGL Command Set: Supported 00:13:33.406 SGL Keyed: Not Supported 00:13:33.406 SGL Bit Bucket Descriptor: Not Supported 00:13:33.406 SGL Metadata Pointer: Not Supported 00:13:33.406 Oversized SGL: Not Supported 00:13:33.406 SGL Metadata Address: Not Supported 00:13:33.406 SGL Offset: Not Supported 00:13:33.406 Transport SGL Data Block: Not Supported 00:13:33.406 Replay Protected Memory Block: Not Supported 00:13:33.406 00:13:33.406 Firmware Slot Information 00:13:33.406 ========================= 00:13:33.406 Active slot: 1 00:13:33.406 Slot 1 Firmware Revision: 1.0 00:13:33.406 00:13:33.406 00:13:33.406 Commands Supported and Effects 00:13:33.406 ============================== 00:13:33.406 Admin Commands 00:13:33.406 -------------- 00:13:33.406 Delete I/O Submission Queue (00h): Supported 
00:13:33.406 Create I/O Submission Queue (01h): Supported 00:13:33.406 Get Log Page (02h): Supported 00:13:33.406 Delete I/O Completion Queue (04h): Supported 00:13:33.406 Create I/O Completion Queue (05h): Supported 00:13:33.406 Identify (06h): Supported 00:13:33.406 Abort (08h): Supported 00:13:33.406 Set Features (09h): Supported 00:13:33.406 Get Features (0Ah): Supported 00:13:33.406 Asynchronous Event Request (0Ch): Supported 00:13:33.406 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:33.406 Directive Send (19h): Supported 00:13:33.406 Directive Receive (1Ah): Supported 00:13:33.406 Virtualization Management (1Ch): Supported 00:13:33.406 Doorbell Buffer Config (7Ch): Supported 00:13:33.406 Format NVM (80h): Supported LBA-Change 00:13:33.406 I/O Commands 00:13:33.406 ------------ 00:13:33.406 Flush (00h): Supported LBA-Change 00:13:33.406 Write (01h): Supported LBA-Change 00:13:33.406 Read (02h): Supported 00:13:33.406 Compare (05h): Supported 00:13:33.406 Write Zeroes (08h): Supported LBA-Change 00:13:33.406 Dataset Management (09h): Supported LBA-Change 00:13:33.406 Unknown (0Ch): Supported 00:13:33.406 Unknown (12h): Supported 00:13:33.406 Copy (19h): Supported LBA-Change 00:13:33.406 Unknown (1Dh): Supported LBA-Change 00:13:33.406 00:13:33.406 Error Log 00:13:33.406 ========= 00:13:33.406 00:13:33.406 Arbitration 00:13:33.406 =========== 00:13:33.406 Arbitration Burst: no limit 00:13:33.406 00:13:33.406 Power Management 00:13:33.406 ================ 00:13:33.406 Number of Power States: 1 00:13:33.406 Current Power State: Power State #0 00:13:33.406 Power State #0: 00:13:33.406 Max Power: 25.00 W 00:13:33.406 Non-Operational State: Operational 00:13:33.406 Entry Latency: 16 microseconds 00:13:33.406 Exit Latency: 4 microseconds 00:13:33.406 Relative Read Throughput: 0 00:13:33.406 Relative Read Latency: 0 00:13:33.406 Relative Write Throughput: 0 00:13:33.406 Relative Write Latency: 0 00:13:33.406 [2024-11-26 18:17:26.602536] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64526 terminated unexpected 00:13:33.406 Idle Power: Not Reported 00:13:33.406 Active Power: Not Reported 00:13:33.406 Non-Operational Permissive Mode: Not Supported 00:13:33.406 00:13:33.406 Health Information 00:13:33.406 ================== 00:13:33.406 Critical Warnings: 00:13:33.406 Available Spare Space: OK 00:13:33.406 Temperature: OK 00:13:33.406 Device Reliability: OK 00:13:33.406 Read Only: No 00:13:33.406 Volatile Memory Backup: OK 00:13:33.406 Current Temperature: 323 Kelvin (50 Celsius) 00:13:33.406 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:33.406 Available Spare: 0% 00:13:33.406 Available Spare Threshold: 0% 00:13:33.406 Life Percentage Used: 0% 00:13:33.406 Data Units Read: 728 00:13:33.406 Data Units Written: 656 00:13:33.406 Host Read Commands: 32283 00:13:33.406 Host Write Commands: 32069 00:13:33.406 Controller Busy Time: 0 minutes 00:13:33.406 Power Cycles: 0 00:13:33.406 Power On Hours: 0 hours 00:13:33.406 Unsafe Shutdowns: 0 00:13:33.406 Unrecoverable Media Errors: 0 00:13:33.406 Lifetime Error Log Entries: 0 00:13:33.406 Warning Temperature Time: 0 minutes 00:13:33.406 Critical Temperature Time: 0 minutes 00:13:33.406 00:13:33.406 Number of Queues 00:13:33.406 ================ 00:13:33.406 Number of I/O Submission Queues: 64 00:13:33.406 Number of I/O Completion Queues: 64 00:13:33.406 00:13:33.406 ZNS Specific Controller Data 00:13:33.406 ============================ 00:13:33.406 Zone Append Size Limit: 0 00:13:33.406
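A note on provenance: everything from the first "NVMe Controller at 0000:00:10.0" banner onward is a single spdk_nvme_identify -i 0 pass over all four attached controllers. To keep a copy worth scraping, or to re-run the dump by hand, something along these lines works; the capture path is illustrative, and the -r transport-ID flag is the option the SPDK example apps share for restricting a run to one device (treat exact flags as an assumption and check --help):

  ident=/tmp/identify.txt   # hypothetical capture location
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 > "$ident"
  # or restrict to a single controller instead:
  # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 -r 'trtype:PCIe traddr:0000:00:10.0'
  # the Health Information block is the part most worth watching in CI:
  grep -E 'Current Temperature|Available Spare:|Life Percentage Used' "$ident"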
00:13:33.406 00:13:33.406 Active Namespaces 00:13:33.406 ================= 00:13:33.406 Namespace ID:1 00:13:33.406 Error Recovery Timeout: Unlimited 00:13:33.406 Command Set Identifier: NVM (00h) 00:13:33.406 Deallocate: Supported 00:13:33.406 Deallocated/Unwritten Error: Supported 00:13:33.406 Deallocated Read Value: All 0x00 00:13:33.406 Deallocate in Write Zeroes: Not Supported 00:13:33.406 Deallocated Guard Field: 0xFFFF 00:13:33.406 Flush: Supported 00:13:33.406 Reservation: Not Supported 00:13:33.406 Metadata Transferred as: Separate Metadata Buffer 00:13:33.406 Namespace Sharing Capabilities: Private 00:13:33.406 Size (in LBAs): 1548666 (5GiB) 00:13:33.406 Capacity (in LBAs): 1548666 (5GiB) 00:13:33.406 Utilization (in LBAs): 1548666 (5GiB) 00:13:33.406 Thin Provisioning: Not Supported 00:13:33.406 Per-NS Atomic Units: No 00:13:33.406 Maximum Single Source Range Length: 128 00:13:33.406 Maximum Copy Length: 128 00:13:33.406 Maximum Source Range Count: 128 00:13:33.406 NGUID/EUI64 Never Reused: No 00:13:33.406 Namespace Write Protected: No 00:13:33.406 Number of LBA Formats: 8 00:13:33.406 Current LBA Format: LBA Format #07 00:13:33.406 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:33.406 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:33.406 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:33.406 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:33.406 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:33.406 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:33.406 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:33.406 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:33.406 00:13:33.406 NVM Specific Namespace Data 00:13:33.406 =========================== 00:13:33.406 Logical Block Storage Tag Mask: 0 00:13:33.406 Protection Information Capabilities: 00:13:33.406 16b Guard Protection Information Storage Tag Support: No 00:13:33.406 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:33.406 Storage Tag Check Read Support: No 00:13:33.407 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.407 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.407 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.407 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.407 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.407 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.407 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.407 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.407 ===================================================== 00:13:33.407 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:33.407 ===================================================== 00:13:33.407 Controller Capabilities/Features 00:13:33.407 ================================ 00:13:33.407 Vendor ID: 1b36 00:13:33.407 Subsystem Vendor ID: 1af4 00:13:33.407 Serial Number: 12341 00:13:33.407 Model Number: QEMU NVMe Ctrl 00:13:33.407 Firmware Version: 8.0.0 00:13:33.407 Recommended Arb Burst: 6 00:13:33.407 IEEE OUI Identifier: 00 54 52 00:13:33.407 Multi-path I/O 00:13:33.407 May have multiple subsystem ports: No 00:13:33.407 May have multiple controllers: No 
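Worth noting in the dump above: 12340's namespace reports Current LBA Format #07, i.e. 4096-byte data plus 64 bytes of metadata per its format table, while 12341 below sits on plain 4K (format #04). A small awk sketch to tabulate the current format per controller, reusing the hypothetical $ident capture from the previous sketch:

  # print "<BDF> <current LBA format>" for every controller in the capture
  awk '/NVMe Controller at/ {ctrl=$4}
       /Current LBA Format:/ {print ctrl, $NF}' "$ident"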
00:13:33.407 Associated with SR-IOV VF: No 00:13:33.407 Max Data Transfer Size: 524288 00:13:33.407 Max Number of Namespaces: 256 00:13:33.407 Max Number of I/O Queues: 64 00:13:33.407 NVMe Specification Version (VS): 1.4 00:13:33.407 NVMe Specification Version (Identify): 1.4 00:13:33.407 Maximum Queue Entries: 2048 00:13:33.407 Contiguous Queues Required: Yes 00:13:33.407 Arbitration Mechanisms Supported 00:13:33.407 Weighted Round Robin: Not Supported 00:13:33.407 Vendor Specific: Not Supported 00:13:33.407 Reset Timeout: 7500 ms 00:13:33.407 Doorbell Stride: 4 bytes 00:13:33.407 NVM Subsystem Reset: Not Supported 00:13:33.407 Command Sets Supported 00:13:33.407 NVM Command Set: Supported 00:13:33.407 Boot Partition: Not Supported 00:13:33.407 Memory Page Size Minimum: 4096 bytes 00:13:33.407 Memory Page Size Maximum: 65536 bytes 00:13:33.407 Persistent Memory Region: Not Supported 00:13:33.407 Optional Asynchronous Events Supported 00:13:33.407 Namespace Attribute Notices: Supported 00:13:33.407 Firmware Activation Notices: Not Supported 00:13:33.407 ANA Change Notices: Not Supported 00:13:33.407 PLE Aggregate Log Change Notices: Not Supported 00:13:33.407 LBA Status Info Alert Notices: Not Supported 00:13:33.407 EGE Aggregate Log Change Notices: Not Supported 00:13:33.407 Normal NVM Subsystem Shutdown event: Not Supported 00:13:33.407 Zone Descriptor Change Notices: Not Supported 00:13:33.407 Discovery Log Change Notices: Not Supported 00:13:33.407 Controller Attributes 00:13:33.407 128-bit Host Identifier: Not Supported 00:13:33.407 Non-Operational Permissive Mode: Not Supported 00:13:33.407 NVM Sets: Not Supported 00:13:33.407 Read Recovery Levels: Not Supported 00:13:33.407 Endurance Groups: Not Supported 00:13:33.407 Predictable Latency Mode: Not Supported 00:13:33.407 Traffic Based Keep ALive: Not Supported 00:13:33.407 Namespace Granularity: Not Supported 00:13:33.407 SQ Associations: Not Supported 00:13:33.407 UUID List: Not Supported 00:13:33.407 Multi-Domain Subsystem: Not Supported 00:13:33.407 Fixed Capacity Management: Not Supported 00:13:33.407 Variable Capacity Management: Not Supported 00:13:33.407 Delete Endurance Group: Not Supported 00:13:33.407 Delete NVM Set: Not Supported 00:13:33.407 Extended LBA Formats Supported: Supported 00:13:33.407 Flexible Data Placement Supported: Not Supported 00:13:33.407 00:13:33.407 Controller Memory Buffer Support 00:13:33.407 ================================ 00:13:33.407 Supported: No 00:13:33.407 00:13:33.407 Persistent Memory Region Support 00:13:33.407 ================================ 00:13:33.407 Supported: No 00:13:33.407 00:13:33.407 Admin Command Set Attributes 00:13:33.407 ============================ 00:13:33.407 Security Send/Receive: Not Supported 00:13:33.407 Format NVM: Supported 00:13:33.407 Firmware Activate/Download: Not Supported 00:13:33.407 Namespace Management: Supported 00:13:33.407 Device Self-Test: Not Supported 00:13:33.407 Directives: Supported 00:13:33.407 NVMe-MI: Not Supported 00:13:33.407 Virtualization Management: Not Supported 00:13:33.407 Doorbell Buffer Config: Supported 00:13:33.407 Get LBA Status Capability: Not Supported 00:13:33.407 Command & Feature Lockdown Capability: Not Supported 00:13:33.407 Abort Command Limit: 4 00:13:33.407 Async Event Request Limit: 4 00:13:33.407 Number of Firmware Slots: N/A 00:13:33.407 Firmware Slot 1 Read-Only: N/A 00:13:33.407 Firmware Activation Without Reset: N/A 00:13:33.407 Multiple Update Detection Support: N/A 00:13:33.407 Firmware Update Granularity: No 
Information Provided 00:13:33.407 Per-Namespace SMART Log: Yes 00:13:33.407 Asymmetric Namespace Access Log Page: Not Supported 00:13:33.407 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:13:33.407 Command Effects Log Page: Supported 00:13:33.407 Get Log Page Extended Data: Supported 00:13:33.407 Telemetry Log Pages: Not Supported 00:13:33.407 Persistent Event Log Pages: Not Supported 00:13:33.407 Supported Log Pages Log Page: May Support 00:13:33.407 Commands Supported & Effects Log Page: Not Supported 00:13:33.407 Feature Identifiers & Effects Log Page:May Support 00:13:33.407 NVMe-MI Commands & Effects Log Page: May Support 00:13:33.407 Data Area 4 for Telemetry Log: Not Supported 00:13:33.407 Error Log Page Entries Supported: 1 00:13:33.407 Keep Alive: Not Supported 00:13:33.407 00:13:33.407 NVM Command Set Attributes 00:13:33.407 ========================== 00:13:33.407 Submission Queue Entry Size 00:13:33.407 Max: 64 00:13:33.407 Min: 64 00:13:33.407 Completion Queue Entry Size 00:13:33.407 Max: 16 00:13:33.407 Min: 16 00:13:33.407 Number of Namespaces: 256 00:13:33.407 Compare Command: Supported 00:13:33.407 Write Uncorrectable Command: Not Supported 00:13:33.407 Dataset Management Command: Supported 00:13:33.407 Write Zeroes Command: Supported 00:13:33.407 Set Features Save Field: Supported 00:13:33.407 Reservations: Not Supported 00:13:33.407 Timestamp: Supported 00:13:33.407 Copy: Supported 00:13:33.407 Volatile Write Cache: Present 00:13:33.407 Atomic Write Unit (Normal): 1 00:13:33.407 Atomic Write Unit (PFail): 1 00:13:33.407 Atomic Compare & Write Unit: 1 00:13:33.407 Fused Compare & Write: Not Supported 00:13:33.407 Scatter-Gather List 00:13:33.407 SGL Command Set: Supported 00:13:33.407 SGL Keyed: Not Supported 00:13:33.407 SGL Bit Bucket Descriptor: Not Supported 00:13:33.407 SGL Metadata Pointer: Not Supported 00:13:33.407 Oversized SGL: Not Supported 00:13:33.407 SGL Metadata Address: Not Supported 00:13:33.407 SGL Offset: Not Supported 00:13:33.407 Transport SGL Data Block: Not Supported 00:13:33.407 Replay Protected Memory Block: Not Supported 00:13:33.407 00:13:33.407 Firmware Slot Information 00:13:33.407 ========================= 00:13:33.407 Active slot: 1 00:13:33.407 Slot 1 Firmware Revision: 1.0 00:13:33.407 00:13:33.407 00:13:33.407 Commands Supported and Effects 00:13:33.407 ============================== 00:13:33.407 Admin Commands 00:13:33.407 -------------- 00:13:33.407 Delete I/O Submission Queue (00h): Supported 00:13:33.407 Create I/O Submission Queue (01h): Supported 00:13:33.407 Get Log Page (02h): Supported 00:13:33.407 Delete I/O Completion Queue (04h): Supported 00:13:33.407 Create I/O Completion Queue (05h): Supported 00:13:33.407 Identify (06h): Supported 00:13:33.407 Abort (08h): Supported 00:13:33.407 Set Features (09h): Supported 00:13:33.407 Get Features (0Ah): Supported 00:13:33.407 Asynchronous Event Request (0Ch): Supported 00:13:33.407 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:33.407 Directive Send (19h): Supported 00:13:33.407 Directive Receive (1Ah): Supported 00:13:33.407 Virtualization Management (1Ch): Supported 00:13:33.407 Doorbell Buffer Config (7Ch): Supported 00:13:33.407 Format NVM (80h): Supported LBA-Change 00:13:33.407 I/O Commands 00:13:33.407 ------------ 00:13:33.407 Flush (00h): Supported LBA-Change 00:13:33.407 Write (01h): Supported LBA-Change 00:13:33.407 Read (02h): Supported 00:13:33.407 Compare (05h): Supported 00:13:33.407 Write Zeroes (08h): Supported LBA-Change 00:13:33.407 Dataset Management 
(09h): Supported LBA-Change 00:13:33.407 Unknown (0Ch): Supported 00:13:33.407 Unknown (12h): Supported 00:13:33.407 Copy (19h): Supported LBA-Change 00:13:33.407 Unknown (1Dh): Supported LBA-Change 00:13:33.407 00:13:33.407 Error Log 00:13:33.407 ========= 00:13:33.407 00:13:33.407 Arbitration 00:13:33.407 =========== 00:13:33.407 Arbitration Burst: no limit 00:13:33.407 00:13:33.407 Power Management 00:13:33.407 ================ 00:13:33.407 Number of Power States: 1 00:13:33.407 Current Power State: Power State #0 00:13:33.407 Power State #0: 00:13:33.407 Max Power: 25.00 W 00:13:33.408 Non-Operational State: Operational 00:13:33.408 Entry Latency: 16 microseconds 00:13:33.408 Exit Latency: 4 microseconds 00:13:33.408 Relative Read Throughput: 0 00:13:33.408 Relative Read Latency: 0 00:13:33.408 Relative Write Throughput: 0 00:13:33.408 Relative Write Latency: 0 00:13:33.408 Idle Power: Not Reported 00:13:33.408 Active Power: Not Reported 00:13:33.408 Non-Operational Permissive Mode: Not Supported 00:13:33.408 00:13:33.408 Health Information 00:13:33.408 ================== 00:13:33.408 Critical Warnings: 00:13:33.408 Available Spare Space: OK 00:13:33.408 [2024-11-26 18:17:26.603204] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64526 terminated unexpected 00:13:33.408 Temperature: OK 00:13:33.408 Device Reliability: OK 00:13:33.408 Read Only: No 00:13:33.408 Volatile Memory Backup: OK 00:13:33.408 Current Temperature: 323 Kelvin (50 Celsius) 00:13:33.408 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:33.408 Available Spare: 0% 00:13:33.408 Available Spare Threshold: 0% 00:13:33.408 Life Percentage Used: 0% 00:13:33.408 Data Units Read: 1070 00:13:33.408 Data Units Written: 937 00:13:33.408 Host Read Commands: 47085 00:13:33.408 Host Write Commands: 45852 00:13:33.408 Controller Busy Time: 0 minutes 00:13:33.408 Power Cycles: 0 00:13:33.408 Power On Hours: 0 hours 00:13:33.408 Unsafe Shutdowns: 0 00:13:33.408 Unrecoverable Media Errors: 0 00:13:33.408 Lifetime Error Log Entries: 0 00:13:33.408 Warning Temperature Time: 0 minutes 00:13:33.408 Critical Temperature Time: 0 minutes 00:13:33.408 00:13:33.408 Number of Queues 00:13:33.408 ================ 00:13:33.408 Number of I/O Submission Queues: 64 00:13:33.408 Number of I/O Completion Queues: 64 00:13:33.408 00:13:33.408 ZNS Specific Controller Data 00:13:33.408 ============================ 00:13:33.408 Zone Append Size Limit: 0 00:13:33.408 00:13:33.408 00:13:33.408 Active Namespaces 00:13:33.408 ================= 00:13:33.408 Namespace ID:1 00:13:33.408 Error Recovery Timeout: Unlimited 00:13:33.408 Command Set Identifier: NVM (00h) 00:13:33.408 Deallocate: Supported 00:13:33.408 Deallocated/Unwritten Error: Supported 00:13:33.408 Deallocated Read Value: All 0x00 00:13:33.408 Deallocate in Write Zeroes: Not Supported 00:13:33.408 Deallocated Guard Field: 0xFFFF 00:13:33.408 Flush: Supported 00:13:33.408 Reservation: Not Supported 00:13:33.408 Namespace Sharing Capabilities: Private 00:13:33.408 Size (in LBAs): 1310720 (5GiB) 00:13:33.408 Capacity (in LBAs): 1310720 (5GiB) 00:13:33.408 Utilization (in LBAs): 1310720 (5GiB) 00:13:33.408 Thin Provisioning: Not Supported 00:13:33.408 Per-NS Atomic Units: No 00:13:33.408 Maximum Single Source Range Length: 128 00:13:33.408 Maximum Copy Length: 128 00:13:33.408 Maximum Source Range Count: 128 00:13:33.408 NGUID/EUI64 Never Reused: No 00:13:33.408 Namespace Write Protected: No 00:13:33.408 Number of LBA Formats: 8 00:13:33.408 Current LBA Format:
LBA Format #04 00:13:33.408 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:33.408 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:33.408 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:33.408 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:33.408 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:33.408 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:33.408 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:33.408 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:33.408 00:13:33.408 NVM Specific Namespace Data 00:13:33.408 =========================== 00:13:33.408 Logical Block Storage Tag Mask: 0 00:13:33.408 Protection Information Capabilities: 00:13:33.408 16b Guard Protection Information Storage Tag Support: No 00:13:33.408 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:33.408 Storage Tag Check Read Support: No 00:13:33.408 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.408 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.408 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.408 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.408 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.408 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.408 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.408 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.408 ===================================================== 00:13:33.408 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:33.408 ===================================================== 00:13:33.408 Controller Capabilities/Features 00:13:33.408 ================================ 00:13:33.408 Vendor ID: 1b36 00:13:33.408 Subsystem Vendor ID: 1af4 00:13:33.408 Serial Number: 12343 00:13:33.408 Model Number: QEMU NVMe Ctrl 00:13:33.408 Firmware Version: 8.0.0 00:13:33.408 Recommended Arb Burst: 6 00:13:33.408 IEEE OUI Identifier: 00 54 52 00:13:33.408 Multi-path I/O 00:13:33.408 May have multiple subsystem ports: No 00:13:33.408 May have multiple controllers: Yes 00:13:33.408 Associated with SR-IOV VF: No 00:13:33.408 Max Data Transfer Size: 524288 00:13:33.408 Max Number of Namespaces: 256 00:13:33.408 Max Number of I/O Queues: 64 00:13:33.408 NVMe Specification Version (VS): 1.4 00:13:33.408 NVMe Specification Version (Identify): 1.4 00:13:33.408 Maximum Queue Entries: 2048 00:13:33.408 Contiguous Queues Required: Yes 00:13:33.408 Arbitration Mechanisms Supported 00:13:33.408 Weighted Round Robin: Not Supported 00:13:33.408 Vendor Specific: Not Supported 00:13:33.408 Reset Timeout: 7500 ms 00:13:33.408 Doorbell Stride: 4 bytes 00:13:33.408 NVM Subsystem Reset: Not Supported 00:13:33.408 Command Sets Supported 00:13:33.408 NVM Command Set: Supported 00:13:33.408 Boot Partition: Not Supported 00:13:33.408 Memory Page Size Minimum: 4096 bytes 00:13:33.408 Memory Page Size Maximum: 65536 bytes 00:13:33.408 Persistent Memory Region: Not Supported 00:13:33.408 Optional Asynchronous Events Supported 00:13:33.408 Namespace Attribute Notices: Supported 00:13:33.408 Firmware Activation Notices: Not Supported 00:13:33.408 ANA Change Notices: Not Supported 00:13:33.408 PLE Aggregate Log 
Change Notices: Not Supported 00:13:33.408 LBA Status Info Alert Notices: Not Supported 00:13:33.408 EGE Aggregate Log Change Notices: Not Supported 00:13:33.408 Normal NVM Subsystem Shutdown event: Not Supported 00:13:33.408 Zone Descriptor Change Notices: Not Supported 00:13:33.408 Discovery Log Change Notices: Not Supported 00:13:33.408 Controller Attributes 00:13:33.408 128-bit Host Identifier: Not Supported 00:13:33.408 Non-Operational Permissive Mode: Not Supported 00:13:33.408 NVM Sets: Not Supported 00:13:33.408 Read Recovery Levels: Not Supported 00:13:33.408 Endurance Groups: Supported 00:13:33.408 Predictable Latency Mode: Not Supported 00:13:33.408 Traffic Based Keep ALive: Not Supported 00:13:33.408 Namespace Granularity: Not Supported 00:13:33.408 SQ Associations: Not Supported 00:13:33.408 UUID List: Not Supported 00:13:33.408 Multi-Domain Subsystem: Not Supported 00:13:33.408 Fixed Capacity Management: Not Supported 00:13:33.408 Variable Capacity Management: Not Supported 00:13:33.408 Delete Endurance Group: Not Supported 00:13:33.408 Delete NVM Set: Not Supported 00:13:33.408 Extended LBA Formats Supported: Supported 00:13:33.408 Flexible Data Placement Supported: Supported 00:13:33.408 00:13:33.408 Controller Memory Buffer Support 00:13:33.408 ================================ 00:13:33.408 Supported: No 00:13:33.408 00:13:33.408 Persistent Memory Region Support 00:13:33.408 ================================ 00:13:33.408 Supported: No 00:13:33.408 00:13:33.408 Admin Command Set Attributes 00:13:33.408 ============================ 00:13:33.408 Security Send/Receive: Not Supported 00:13:33.408 Format NVM: Supported 00:13:33.408 Firmware Activate/Download: Not Supported 00:13:33.408 Namespace Management: Supported 00:13:33.408 Device Self-Test: Not Supported 00:13:33.408 Directives: Supported 00:13:33.408 NVMe-MI: Not Supported 00:13:33.408 Virtualization Management: Not Supported 00:13:33.408 Doorbell Buffer Config: Supported 00:13:33.408 Get LBA Status Capability: Not Supported 00:13:33.408 Command & Feature Lockdown Capability: Not Supported 00:13:33.408 Abort Command Limit: 4 00:13:33.408 Async Event Request Limit: 4 00:13:33.408 Number of Firmware Slots: N/A 00:13:33.408 Firmware Slot 1 Read-Only: N/A 00:13:33.408 Firmware Activation Without Reset: N/A 00:13:33.408 Multiple Update Detection Support: N/A 00:13:33.408 Firmware Update Granularity: No Information Provided 00:13:33.408 Per-Namespace SMART Log: Yes 00:13:33.408 Asymmetric Namespace Access Log Page: Not Supported 00:13:33.408 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:13:33.408 Command Effects Log Page: Supported 00:13:33.408 Get Log Page Extended Data: Supported 00:13:33.409 Telemetry Log Pages: Not Supported 00:13:33.409 Persistent Event Log Pages: Not Supported 00:13:33.409 Supported Log Pages Log Page: May Support 00:13:33.409 Commands Supported & Effects Log Page: Not Supported 00:13:33.409 Feature Identifiers & Effects Log Page:May Support 00:13:33.409 NVMe-MI Commands & Effects Log Page: May Support 00:13:33.409 Data Area 4 for Telemetry Log: Not Supported 00:13:33.409 Error Log Page Entries Supported: 1 00:13:33.409 Keep Alive: Not Supported 00:13:33.409 00:13:33.409 NVM Command Set Attributes 00:13:33.409 ========================== 00:13:33.409 Submission Queue Entry Size 00:13:33.409 Max: 64 00:13:33.409 Min: 64 00:13:33.409 Completion Queue Entry Size 00:13:33.409 Max: 16 00:13:33.409 Min: 16 00:13:33.409 Number of Namespaces: 256 00:13:33.409 Compare Command: Supported 00:13:33.409 Write 
Uncorrectable Command: Not Supported 00:13:33.409 Dataset Management Command: Supported 00:13:33.409 Write Zeroes Command: Supported 00:13:33.409 Set Features Save Field: Supported 00:13:33.409 Reservations: Not Supported 00:13:33.409 Timestamp: Supported 00:13:33.409 Copy: Supported 00:13:33.409 Volatile Write Cache: Present 00:13:33.409 Atomic Write Unit (Normal): 1 00:13:33.409 Atomic Write Unit (PFail): 1 00:13:33.409 Atomic Compare & Write Unit: 1 00:13:33.409 Fused Compare & Write: Not Supported 00:13:33.409 Scatter-Gather List 00:13:33.409 SGL Command Set: Supported 00:13:33.409 SGL Keyed: Not Supported 00:13:33.409 SGL Bit Bucket Descriptor: Not Supported 00:13:33.409 SGL Metadata Pointer: Not Supported 00:13:33.409 Oversized SGL: Not Supported 00:13:33.409 SGL Metadata Address: Not Supported 00:13:33.409 SGL Offset: Not Supported 00:13:33.409 Transport SGL Data Block: Not Supported 00:13:33.409 Replay Protected Memory Block: Not Supported 00:13:33.409 00:13:33.409 Firmware Slot Information 00:13:33.409 ========================= 00:13:33.409 Active slot: 1 00:13:33.409 Slot 1 Firmware Revision: 1.0 00:13:33.409 00:13:33.409 00:13:33.409 Commands Supported and Effects 00:13:33.409 ============================== 00:13:33.409 Admin Commands 00:13:33.409 -------------- 00:13:33.409 Delete I/O Submission Queue (00h): Supported 00:13:33.409 Create I/O Submission Queue (01h): Supported 00:13:33.409 Get Log Page (02h): Supported 00:13:33.409 Delete I/O Completion Queue (04h): Supported 00:13:33.409 Create I/O Completion Queue (05h): Supported 00:13:33.409 Identify (06h): Supported 00:13:33.409 Abort (08h): Supported 00:13:33.409 Set Features (09h): Supported 00:13:33.409 Get Features (0Ah): Supported 00:13:33.409 Asynchronous Event Request (0Ch): Supported 00:13:33.409 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:33.409 Directive Send (19h): Supported 00:13:33.409 Directive Receive (1Ah): Supported 00:13:33.409 Virtualization Management (1Ch): Supported 00:13:33.409 Doorbell Buffer Config (7Ch): Supported 00:13:33.409 Format NVM (80h): Supported LBA-Change 00:13:33.409 I/O Commands 00:13:33.409 ------------ 00:13:33.409 Flush (00h): Supported LBA-Change 00:13:33.409 Write (01h): Supported LBA-Change 00:13:33.409 Read (02h): Supported 00:13:33.409 Compare (05h): Supported 00:13:33.409 Write Zeroes (08h): Supported LBA-Change 00:13:33.409 Dataset Management (09h): Supported LBA-Change 00:13:33.409 Unknown (0Ch): Supported 00:13:33.409 Unknown (12h): Supported 00:13:33.409 Copy (19h): Supported LBA-Change 00:13:33.409 Unknown (1Dh): Supported LBA-Change 00:13:33.409 00:13:33.409 Error Log 00:13:33.409 ========= 00:13:33.409 00:13:33.409 Arbitration 00:13:33.409 =========== 00:13:33.409 Arbitration Burst: no limit 00:13:33.409 00:13:33.409 Power Management 00:13:33.409 ================ 00:13:33.409 Number of Power States: 1 00:13:33.409 Current Power State: Power State #0 00:13:33.409 Power State #0: 00:13:33.409 Max Power: 25.00 W 00:13:33.409 Non-Operational State: Operational 00:13:33.409 Entry Latency: 16 microseconds 00:13:33.409 Exit Latency: 4 microseconds 00:13:33.409 Relative Read Throughput: 0 00:13:33.409 Relative Read Latency: 0 00:13:33.409 Relative Write Throughput: 0 00:13:33.409 Relative Write Latency: 0 00:13:33.409 Idle Power: Not Reported 00:13:33.409 Active Power: Not Reported 00:13:33.409 Non-Operational Permissive Mode: Not Supported 00:13:33.409 00:13:33.409 Health Information 00:13:33.409 ================== 00:13:33.409 Critical Warnings: 00:13:33.409 
Available Spare Space: OK 00:13:33.409 Temperature: OK 00:13:33.409 Device Reliability: OK 00:13:33.409 Read Only: No 00:13:33.409 Volatile Memory Backup: OK 00:13:33.409 Current Temperature: 323 Kelvin (50 Celsius) 00:13:33.409 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:33.409 Available Spare: 0% 00:13:33.409 Available Spare Threshold: 0% 00:13:33.409 Life Percentage Used: 0% 00:13:33.409 Data Units Read: 797 00:13:33.409 Data Units Written: 726 00:13:33.409 Host Read Commands: 33246 00:13:33.409 Host Write Commands: 32669 00:13:33.409 Controller Busy Time: 0 minutes 00:13:33.409 Power Cycles: 0 00:13:33.409 Power On Hours: 0 hours 00:13:33.409 Unsafe Shutdowns: 0 00:13:33.409 Unrecoverable Media Errors: 0 00:13:33.409 Lifetime Error Log Entries: 0 00:13:33.409 Warning Temperature Time: 0 minutes 00:13:33.409 Critical Temperature Time: 0 minutes 00:13:33.409 00:13:33.409 Number of Queues 00:13:33.409 ================ 00:13:33.409 Number of I/O Submission Queues: 64 00:13:33.409 Number of I/O Completion Queues: 64 00:13:33.409 00:13:33.409 ZNS Specific Controller Data 00:13:33.409 ============================ 00:13:33.409 Zone Append Size Limit: 0 00:13:33.409 00:13:33.409 00:13:33.409 Active Namespaces 00:13:33.409 ================= 00:13:33.409 Namespace ID:1 00:13:33.409 Error Recovery Timeout: Unlimited 00:13:33.409 Command Set Identifier: NVM (00h) 00:13:33.409 Deallocate: Supported 00:13:33.409 Deallocated/Unwritten Error: Supported 00:13:33.409 Deallocated Read Value: All 0x00 00:13:33.409 Deallocate in Write Zeroes: Not Supported 00:13:33.409 Deallocated Guard Field: 0xFFFF 00:13:33.409 Flush: Supported 00:13:33.409 Reservation: Not Supported 00:13:33.409 Namespace Sharing Capabilities: Multiple Controllers 00:13:33.409 Size (in LBAs): 262144 (1GiB) 00:13:33.409 Capacity (in LBAs): 262144 (1GiB) 00:13:33.409 Utilization (in LBAs): 262144 (1GiB) 00:13:33.409 Thin Provisioning: Not Supported 00:13:33.409 Per-NS Atomic Units: No 00:13:33.409 Maximum Single Source Range Length: 128 00:13:33.409 Maximum Copy Length: 128 00:13:33.409 Maximum Source Range Count: 128 00:13:33.409 NGUID/EUI64 Never Reused: No 00:13:33.409 Namespace Write Protected: No 00:13:33.409 Endurance group ID: 1 00:13:33.409 Number of LBA Formats: 8 00:13:33.409 Current LBA Format: LBA Format #04 00:13:33.409 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:33.409 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:33.409 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:33.409 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:33.409 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:33.409 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:33.409 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:33.409 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:33.409 00:13:33.409 Get Feature FDP: 00:13:33.409 ================ 00:13:33.409 Enabled: Yes 00:13:33.409 FDP configuration index: 0 00:13:33.409 00:13:33.409 FDP configurations log page 00:13:33.409 =========================== 00:13:33.409 Number of FDP configurations: 1 00:13:33.409 Version: 0 00:13:33.409 Size: 112 00:13:33.409 FDP Configuration Descriptor: 0 00:13:33.409 Descriptor Size: 96 00:13:33.409 Reclaim Group Identifier format: 2 00:13:33.409 FDP Volatile Write Cache: Not Present 00:13:33.409 FDP Configuration: Valid 00:13:33.409 Vendor Specific Size: 0 00:13:33.409 Number of Reclaim Groups: 2 00:13:33.409 Number of Reclaim Unit Handles: 8 00:13:33.409 Max Placement Identifiers: 128 00:13:33.409 Number of
Namespaces Supported: 256 00:13:33.409 Reclaim unit Nominal Size: 6000000 bytes 00:13:33.409 Estimated Reclaim Unit Time Limit: Not Reported 00:13:33.409 RUH Desc #000: RUH Type: Initially Isolated 00:13:33.409 RUH Desc #001: RUH Type: Initially Isolated 00:13:33.409 RUH Desc #002: RUH Type: Initially Isolated 00:13:33.409 RUH Desc #003: RUH Type: Initially Isolated 00:13:33.409 RUH Desc #004: RUH Type: Initially Isolated 00:13:33.409 RUH Desc #005: RUH Type: Initially Isolated 00:13:33.409 RUH Desc #006: RUH Type: Initially Isolated 00:13:33.409 RUH Desc #007: RUH Type: Initially Isolated 00:13:33.409 00:13:33.409 FDP reclaim unit handle usage log page 00:13:33.409 ====================================== 00:13:33.409 Number of Reclaim Unit Handles: 8 00:13:33.409 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:33.409 RUH Usage Desc #001: RUH Attributes: Unused 00:13:33.410 RUH Usage Desc #002: RUH Attributes: Unused 00:13:33.410 RUH Usage Desc #003: RUH Attributes: Unused 00:13:33.410 RUH Usage Desc #004: RUH Attributes: Unused 00:13:33.410 RUH Usage Desc #005: RUH Attributes: Unused 00:13:33.410 RUH Usage Desc #006: RUH Attributes: Unused 00:13:33.410 RUH Usage Desc #007: RUH Attributes: Unused 00:13:33.410 00:13:33.410 FDP statistics log page 00:13:33.410 ======================= 00:13:33.410 Host bytes with metadata written: 457351168 00:13:33.410 [2024-11-26 18:17:26.604309] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64526 terminated unexpected 00:13:33.410 Media bytes with metadata written: 457416704 00:13:33.410 Media bytes erased: 0 00:13:33.410 00:13:33.410 FDP events log page 00:13:33.410 =================== 00:13:33.410 Number of FDP events: 0 00:13:33.410 00:13:33.410 NVM Specific Namespace Data 00:13:33.410 =========================== 00:13:33.410 Logical Block Storage Tag Mask: 0 00:13:33.410 Protection Information Capabilities: 00:13:33.410 16b Guard Protection Information Storage Tag Support: No 00:13:33.410 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:33.410 Storage Tag Check Read Support: No 00:13:33.410 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.410 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.410 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.410 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.410 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.410 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.410 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.410 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.410 ===================================================== 00:13:33.410 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:33.410 ===================================================== 00:13:33.410 Controller Capabilities/Features 00:13:33.410 ================================ 00:13:33.410 Vendor ID: 1b36 00:13:33.410 Subsystem Vendor ID: 1af4 00:13:33.410 Serial Number: 12342 00:13:33.410 Model Number: QEMU NVMe Ctrl 00:13:33.410 Firmware Version: 8.0.0 00:13:33.410 Recommended Arb Burst: 6 00:13:33.410 IEEE OUI Identifier: 00 54 52 00:13:33.410 Multi-path I/O
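One more aside: of the four controllers in this rig, only the 12343 subsystem at 0000:00:13.0 (the fdp-subsys3 NQN) reports Flexible Data Placement, which is why it alone carries the Get Feature FDP and FDP log-page sections above. A one-line check against the same hypothetical $ident capture used in the earlier sketches:

  grep -E 'Flexible Data Placement Supported|Get Feature FDP|FDP configuration index' "$ident"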
00:13:33.410 May have multiple subsystem ports: No 00:13:33.410 May have multiple controllers: No 00:13:33.410 Associated with SR-IOV VF: No 00:13:33.410 Max Data Transfer Size: 524288 00:13:33.410 Max Number of Namespaces: 256 00:13:33.410 Max Number of I/O Queues: 64 00:13:33.410 NVMe Specification Version (VS): 1.4 00:13:33.410 NVMe Specification Version (Identify): 1.4 00:13:33.410 Maximum Queue Entries: 2048 00:13:33.410 Contiguous Queues Required: Yes 00:13:33.410 Arbitration Mechanisms Supported 00:13:33.410 Weighted Round Robin: Not Supported 00:13:33.410 Vendor Specific: Not Supported 00:13:33.410 Reset Timeout: 7500 ms 00:13:33.410 Doorbell Stride: 4 bytes 00:13:33.410 NVM Subsystem Reset: Not Supported 00:13:33.410 Command Sets Supported 00:13:33.410 NVM Command Set: Supported 00:13:33.410 Boot Partition: Not Supported 00:13:33.410 Memory Page Size Minimum: 4096 bytes 00:13:33.410 Memory Page Size Maximum: 65536 bytes 00:13:33.410 Persistent Memory Region: Not Supported 00:13:33.410 Optional Asynchronous Events Supported 00:13:33.410 Namespace Attribute Notices: Supported 00:13:33.410 Firmware Activation Notices: Not Supported 00:13:33.410 ANA Change Notices: Not Supported 00:13:33.410 PLE Aggregate Log Change Notices: Not Supported 00:13:33.410 LBA Status Info Alert Notices: Not Supported 00:13:33.410 EGE Aggregate Log Change Notices: Not Supported 00:13:33.410 Normal NVM Subsystem Shutdown event: Not Supported 00:13:33.410 Zone Descriptor Change Notices: Not Supported 00:13:33.410 Discovery Log Change Notices: Not Supported 00:13:33.410 Controller Attributes 00:13:33.410 128-bit Host Identifier: Not Supported 00:13:33.410 Non-Operational Permissive Mode: Not Supported 00:13:33.410 NVM Sets: Not Supported 00:13:33.410 Read Recovery Levels: Not Supported 00:13:33.410 Endurance Groups: Not Supported 00:13:33.410 Predictable Latency Mode: Not Supported 00:13:33.410 Traffic Based Keep ALive: Not Supported 00:13:33.410 Namespace Granularity: Not Supported 00:13:33.410 SQ Associations: Not Supported 00:13:33.410 UUID List: Not Supported 00:13:33.410 Multi-Domain Subsystem: Not Supported 00:13:33.410 Fixed Capacity Management: Not Supported 00:13:33.410 Variable Capacity Management: Not Supported 00:13:33.410 Delete Endurance Group: Not Supported 00:13:33.410 Delete NVM Set: Not Supported 00:13:33.410 Extended LBA Formats Supported: Supported 00:13:33.410 Flexible Data Placement Supported: Not Supported 00:13:33.410 00:13:33.410 Controller Memory Buffer Support 00:13:33.410 ================================ 00:13:33.410 Supported: No 00:13:33.410 00:13:33.410 Persistent Memory Region Support 00:13:33.410 ================================ 00:13:33.410 Supported: No 00:13:33.410 00:13:33.410 Admin Command Set Attributes 00:13:33.410 ============================ 00:13:33.410 Security Send/Receive: Not Supported 00:13:33.410 Format NVM: Supported 00:13:33.410 Firmware Activate/Download: Not Supported 00:13:33.410 Namespace Management: Supported 00:13:33.410 Device Self-Test: Not Supported 00:13:33.410 Directives: Supported 00:13:33.410 NVMe-MI: Not Supported 00:13:33.410 Virtualization Management: Not Supported 00:13:33.410 Doorbell Buffer Config: Supported 00:13:33.410 Get LBA Status Capability: Not Supported 00:13:33.410 Command & Feature Lockdown Capability: Not Supported 00:13:33.410 Abort Command Limit: 4 00:13:33.410 Async Event Request Limit: 4 00:13:33.410 Number of Firmware Slots: N/A 00:13:33.410 Firmware Slot 1 Read-Only: N/A 00:13:33.410 Firmware Activation Without Reset: N/A 
00:13:33.410 Multiple Update Detection Support: N/A 00:13:33.410 Firmware Update Granularity: No Information Provided 00:13:33.410 Per-Namespace SMART Log: Yes 00:13:33.410 Asymmetric Namespace Access Log Page: Not Supported 00:13:33.410 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:13:33.410 Command Effects Log Page: Supported 00:13:33.410 Get Log Page Extended Data: Supported 00:13:33.410 Telemetry Log Pages: Not Supported 00:13:33.410 Persistent Event Log Pages: Not Supported 00:13:33.410 Supported Log Pages Log Page: May Support 00:13:33.410 Commands Supported & Effects Log Page: Not Supported 00:13:33.410 Feature Identifiers & Effects Log Page:May Support 00:13:33.410 NVMe-MI Commands & Effects Log Page: May Support 00:13:33.410 Data Area 4 for Telemetry Log: Not Supported 00:13:33.410 Error Log Page Entries Supported: 1 00:13:33.410 Keep Alive: Not Supported 00:13:33.410 00:13:33.410 NVM Command Set Attributes 00:13:33.410 ========================== 00:13:33.410 Submission Queue Entry Size 00:13:33.410 Max: 64 00:13:33.410 Min: 64 00:13:33.410 Completion Queue Entry Size 00:13:33.410 Max: 16 00:13:33.410 Min: 16 00:13:33.410 Number of Namespaces: 256 00:13:33.410 Compare Command: Supported 00:13:33.410 Write Uncorrectable Command: Not Supported 00:13:33.410 Dataset Management Command: Supported 00:13:33.410 Write Zeroes Command: Supported 00:13:33.410 Set Features Save Field: Supported 00:13:33.410 Reservations: Not Supported 00:13:33.410 Timestamp: Supported 00:13:33.410 Copy: Supported 00:13:33.410 Volatile Write Cache: Present 00:13:33.410 Atomic Write Unit (Normal): 1 00:13:33.410 Atomic Write Unit (PFail): 1 00:13:33.410 Atomic Compare & Write Unit: 1 00:13:33.410 Fused Compare & Write: Not Supported 00:13:33.410 Scatter-Gather List 00:13:33.410 SGL Command Set: Supported 00:13:33.410 SGL Keyed: Not Supported 00:13:33.410 SGL Bit Bucket Descriptor: Not Supported 00:13:33.410 SGL Metadata Pointer: Not Supported 00:13:33.410 Oversized SGL: Not Supported 00:13:33.411 SGL Metadata Address: Not Supported 00:13:33.411 SGL Offset: Not Supported 00:13:33.411 Transport SGL Data Block: Not Supported 00:13:33.411 Replay Protected Memory Block: Not Supported 00:13:33.411 00:13:33.411 Firmware Slot Information 00:13:33.411 ========================= 00:13:33.411 Active slot: 1 00:13:33.411 Slot 1 Firmware Revision: 1.0 00:13:33.411 00:13:33.411 00:13:33.411 Commands Supported and Effects 00:13:33.411 ============================== 00:13:33.411 Admin Commands 00:13:33.411 -------------- 00:13:33.411 Delete I/O Submission Queue (00h): Supported 00:13:33.411 Create I/O Submission Queue (01h): Supported 00:13:33.411 Get Log Page (02h): Supported 00:13:33.411 Delete I/O Completion Queue (04h): Supported 00:13:33.411 Create I/O Completion Queue (05h): Supported 00:13:33.411 Identify (06h): Supported 00:13:33.411 Abort (08h): Supported 00:13:33.411 Set Features (09h): Supported 00:13:33.411 Get Features (0Ah): Supported 00:13:33.411 Asynchronous Event Request (0Ch): Supported 00:13:33.411 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:33.411 Directive Send (19h): Supported 00:13:33.411 Directive Receive (1Ah): Supported 00:13:33.411 Virtualization Management (1Ch): Supported 00:13:33.411 Doorbell Buffer Config (7Ch): Supported 00:13:33.411 Format NVM (80h): Supported LBA-Change 00:13:33.411 I/O Commands 00:13:33.411 ------------ 00:13:33.411 Flush (00h): Supported LBA-Change 00:13:33.411 Write (01h): Supported LBA-Change 00:13:33.411 Read (02h): Supported 00:13:33.411 Compare (05h): 
Supported 00:13:33.411 Write Zeroes (08h): Supported LBA-Change 00:13:33.411 Dataset Management (09h): Supported LBA-Change 00:13:33.411 Unknown (0Ch): Supported 00:13:33.411 Unknown (12h): Supported 00:13:33.411 Copy (19h): Supported LBA-Change 00:13:33.411 Unknown (1Dh): Supported LBA-Change 00:13:33.411 00:13:33.411 Error Log 00:13:33.411 ========= 00:13:33.411 00:13:33.411 Arbitration 00:13:33.411 =========== 00:13:33.411 Arbitration Burst: no limit 00:13:33.411 00:13:33.411 Power Management 00:13:33.411 ================ 00:13:33.411 Number of Power States: 1 00:13:33.411 Current Power State: Power State #0 00:13:33.411 Power State #0: 00:13:33.411 Max Power: 25.00 W 00:13:33.411 Non-Operational State: Operational 00:13:33.411 Entry Latency: 16 microseconds 00:13:33.411 Exit Latency: 4 microseconds 00:13:33.411 Relative Read Throughput: 0 00:13:33.411 Relative Read Latency: 0 00:13:33.411 Relative Write Throughput: 0 00:13:33.411 Relative Write Latency: 0 00:13:33.411 Idle Power: Not Reported 00:13:33.411 Active Power: Not Reported 00:13:33.411 Non-Operational Permissive Mode: Not Supported 00:13:33.411 00:13:33.411 Health Information 00:13:33.411 ================== 00:13:33.411 Critical Warnings: 00:13:33.411 Available Spare Space: OK 00:13:33.411 Temperature: OK 00:13:33.411 Device Reliability: OK 00:13:33.411 Read Only: No 00:13:33.411 Volatile Memory Backup: OK 00:13:33.411 Current Temperature: 323 Kelvin (50 Celsius) 00:13:33.411 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:33.411 Available Spare: 0% 00:13:33.411 Available Spare Threshold: 0% 00:13:33.411 Life Percentage Used: 0% 00:13:33.411 Data Units Read: 2221 00:13:33.411 Data Units Written: 2009 00:13:33.411 Host Read Commands: 98113 00:13:33.411 Host Write Commands: 96382 00:13:33.411 Controller Busy Time: 0 minutes 00:13:33.411 Power Cycles: 0 00:13:33.411 Power On Hours: 0 hours 00:13:33.411 Unsafe Shutdowns: 0 00:13:33.411 Unrecoverable Media Errors: 0 00:13:33.411 Lifetime Error Log Entries: 0 00:13:33.411 Warning Temperature Time: 0 minutes 00:13:33.411 Critical Temperature Time: 0 minutes 00:13:33.411 00:13:33.411 Number of Queues 00:13:33.411 ================ 00:13:33.411 Number of I/O Submission Queues: 64 00:13:33.411 Number of I/O Completion Queues: 64 00:13:33.411 00:13:33.411 ZNS Specific Controller Data 00:13:33.411 ============================ 00:13:33.411 Zone Append Size Limit: 0 00:13:33.411 00:13:33.411 00:13:33.411 Active Namespaces 00:13:33.411 ================= 00:13:33.411 Namespace ID:1 00:13:33.411 Error Recovery Timeout: Unlimited 00:13:33.411 Command Set Identifier: NVM (00h) 00:13:33.411 Deallocate: Supported 00:13:33.411 Deallocated/Unwritten Error: Supported 00:13:33.411 Deallocated Read Value: All 0x00 00:13:33.411 Deallocate in Write Zeroes: Not Supported 00:13:33.411 Deallocated Guard Field: 0xFFFF 00:13:33.411 Flush: Supported 00:13:33.411 Reservation: Not Supported 00:13:33.411 Namespace Sharing Capabilities: Private 00:13:33.411 Size (in LBAs): 1048576 (4GiB) 00:13:33.411 Capacity (in LBAs): 1048576 (4GiB) 00:13:33.411 Utilization (in LBAs): 1048576 (4GiB) 00:13:33.411 Thin Provisioning: Not Supported 00:13:33.411 Per-NS Atomic Units: No 00:13:33.411 Maximum Single Source Range Length: 128 00:13:33.411 Maximum Copy Length: 128 00:13:33.411 Maximum Source Range Count: 128 00:13:33.411 NGUID/EUI64 Never Reused: No 00:13:33.411 Namespace Write Protected: No 00:13:33.411 Number of LBA Formats: 8 00:13:33.411 Current LBA Format: LBA Format #04 00:13:33.411 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:13:33.411 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:33.411 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:33.411 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:33.411 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:33.411 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:33.411 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:33.411 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:33.411 00:13:33.411 NVM Specific Namespace Data 00:13:33.411 =========================== 00:13:33.411 Logical Block Storage Tag Mask: 0 00:13:33.411 Protection Information Capabilities: 00:13:33.411 16b Guard Protection Information Storage Tag Support: No 00:13:33.411 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:33.411 Storage Tag Check Read Support: No 00:13:33.411 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.411 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.411 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.411 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.411 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.411 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.411 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.411 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.411 Namespace ID:2 00:13:33.411 Error Recovery Timeout: Unlimited 00:13:33.411 Command Set Identifier: NVM (00h) 00:13:33.411 Deallocate: Supported 00:13:33.411 Deallocated/Unwritten Error: Supported 00:13:33.411 Deallocated Read Value: All 0x00 00:13:33.411 Deallocate in Write Zeroes: Not Supported 00:13:33.411 Deallocated Guard Field: 0xFFFF 00:13:33.411 Flush: Supported 00:13:33.411 Reservation: Not Supported 00:13:33.411 Namespace Sharing Capabilities: Private 00:13:33.411 Size (in LBAs): 1048576 (4GiB) 00:13:33.411 Capacity (in LBAs): 1048576 (4GiB) 00:13:33.411 Utilization (in LBAs): 1048576 (4GiB) 00:13:33.411 Thin Provisioning: Not Supported 00:13:33.411 Per-NS Atomic Units: No 00:13:33.411 Maximum Single Source Range Length: 128 00:13:33.411 Maximum Copy Length: 128 00:13:33.411 Maximum Source Range Count: 128 00:13:33.411 NGUID/EUI64 Never Reused: No 00:13:33.411 Namespace Write Protected: No 00:13:33.411 Number of LBA Formats: 8 00:13:33.411 Current LBA Format: LBA Format #04 00:13:33.411 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:33.411 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:33.411 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:33.411 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:33.411 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:33.411 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:33.411 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:33.411 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:33.411 00:13:33.411 NVM Specific Namespace Data 00:13:33.411 =========================== 00:13:33.411 Logical Block Storage Tag Mask: 0 00:13:33.411 Protection Information Capabilities: 00:13:33.411 16b Guard Protection Information Storage Tag Support: No 00:13:33.411 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:13:33.411 Storage Tag Check Read Support: No 00:13:33.411 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.411 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.411 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.411 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.411 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.411 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.412 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.412 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.412 Namespace ID:3 00:13:33.412 Error Recovery Timeout: Unlimited 00:13:33.412 Command Set Identifier: NVM (00h) 00:13:33.412 Deallocate: Supported 00:13:33.412 Deallocated/Unwritten Error: Supported 00:13:33.412 Deallocated Read Value: All 0x00 00:13:33.412 Deallocate in Write Zeroes: Not Supported 00:13:33.412 Deallocated Guard Field: 0xFFFF 00:13:33.412 Flush: Supported 00:13:33.412 Reservation: Not Supported 00:13:33.412 Namespace Sharing Capabilities: Private 00:13:33.412 Size (in LBAs): 1048576 (4GiB) 00:13:33.412 Capacity (in LBAs): 1048576 (4GiB) 00:13:33.412 Utilization (in LBAs): 1048576 (4GiB) 00:13:33.412 Thin Provisioning: Not Supported 00:13:33.412 Per-NS Atomic Units: No 00:13:33.412 Maximum Single Source Range Length: 128 00:13:33.412 Maximum Copy Length: 128 00:13:33.412 Maximum Source Range Count: 128 00:13:33.412 NGUID/EUI64 Never Reused: No 00:13:33.412 Namespace Write Protected: No 00:13:33.412 Number of LBA Formats: 8 00:13:33.412 Current LBA Format: LBA Format #04 00:13:33.412 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:33.412 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:33.412 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:33.412 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:33.412 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:33.412 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:33.412 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:33.412 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:33.412 00:13:33.412 NVM Specific Namespace Data 00:13:33.412 =========================== 00:13:33.412 Logical Block Storage Tag Mask: 0 00:13:33.412 Protection Information Capabilities: 00:13:33.412 16b Guard Protection Information Storage Tag Support: No 00:13:33.412 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:33.412 Storage Tag Check Read Support: No 00:13:33.412 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.412 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.412 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.412 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.412 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.412 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.412 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.412 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.412 18:17:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:33.412 18:17:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:13:33.671 ===================================================== 00:13:33.671 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:33.671 ===================================================== 00:13:33.671 Controller Capabilities/Features 00:13:33.671 ================================ 00:13:33.671 Vendor ID: 1b36 00:13:33.671 Subsystem Vendor ID: 1af4 00:13:33.671 Serial Number: 12340 00:13:33.671 Model Number: QEMU NVMe Ctrl 00:13:33.671 Firmware Version: 8.0.0 00:13:33.671 Recommended Arb Burst: 6 00:13:33.671 IEEE OUI Identifier: 00 54 52 00:13:33.671 Multi-path I/O 00:13:33.671 May have multiple subsystem ports: No 00:13:33.671 May have multiple controllers: No 00:13:33.671 Associated with SR-IOV VF: No 00:13:33.671 Max Data Transfer Size: 524288 00:13:33.671 Max Number of Namespaces: 256 00:13:33.671 Max Number of I/O Queues: 64 00:13:33.671 NVMe Specification Version (VS): 1.4 00:13:33.671 NVMe Specification Version (Identify): 1.4 00:13:33.671 Maximum Queue Entries: 2048 00:13:33.671 Contiguous Queues Required: Yes 00:13:33.671 Arbitration Mechanisms Supported 00:13:33.671 Weighted Round Robin: Not Supported 00:13:33.671 Vendor Specific: Not Supported 00:13:33.671 Reset Timeout: 7500 ms 00:13:33.671 Doorbell Stride: 4 bytes 00:13:33.671 NVM Subsystem Reset: Not Supported 00:13:33.671 Command Sets Supported 00:13:33.671 NVM Command Set: Supported 00:13:33.671 Boot Partition: Not Supported 00:13:33.671 Memory Page Size Minimum: 4096 bytes 00:13:33.671 Memory Page Size Maximum: 65536 bytes 00:13:33.671 Persistent Memory Region: Not Supported 00:13:33.671 Optional Asynchronous Events Supported 00:13:33.671 Namespace Attribute Notices: Supported 00:13:33.671 Firmware Activation Notices: Not Supported 00:13:33.671 ANA Change Notices: Not Supported 00:13:33.671 PLE Aggregate Log Change Notices: Not Supported 00:13:33.671 LBA Status Info Alert Notices: Not Supported 00:13:33.671 EGE Aggregate Log Change Notices: Not Supported 00:13:33.671 Normal NVM Subsystem Shutdown event: Not Supported 00:13:33.671 Zone Descriptor Change Notices: Not Supported 00:13:33.671 Discovery Log Change Notices: Not Supported 00:13:33.671 Controller Attributes 00:13:33.671 128-bit Host Identifier: Not Supported 00:13:33.671 Non-Operational Permissive Mode: Not Supported 00:13:33.671 NVM Sets: Not Supported 00:13:33.671 Read Recovery Levels: Not Supported 00:13:33.671 Endurance Groups: Not Supported 00:13:33.671 Predictable Latency Mode: Not Supported 00:13:33.671 Traffic Based Keep ALive: Not Supported 00:13:33.671 Namespace Granularity: Not Supported 00:13:33.671 SQ Associations: Not Supported 00:13:33.671 UUID List: Not Supported 00:13:33.671 Multi-Domain Subsystem: Not Supported 00:13:33.671 Fixed Capacity Management: Not Supported 00:13:33.671 Variable Capacity Management: Not Supported 00:13:33.671 Delete Endurance Group: Not Supported 00:13:33.671 Delete NVM Set: Not Supported 00:13:33.671 Extended LBA Formats Supported: Supported 00:13:33.671 Flexible Data Placement Supported: Not Supported 00:13:33.671 00:13:33.672 Controller Memory Buffer Support 00:13:33.672 ================================ 00:13:33.672 Supported: No 00:13:33.672 00:13:33.672 Persistent Memory Region Support 00:13:33.672 
================================ 00:13:33.672 Supported: No 00:13:33.672 00:13:33.672 Admin Command Set Attributes 00:13:33.672 ============================ 00:13:33.672 Security Send/Receive: Not Supported 00:13:33.672 Format NVM: Supported 00:13:33.672 Firmware Activate/Download: Not Supported 00:13:33.672 Namespace Management: Supported 00:13:33.672 Device Self-Test: Not Supported 00:13:33.672 Directives: Supported 00:13:33.672 NVMe-MI: Not Supported 00:13:33.672 Virtualization Management: Not Supported 00:13:33.672 Doorbell Buffer Config: Supported 00:13:33.672 Get LBA Status Capability: Not Supported 00:13:33.672 Command & Feature Lockdown Capability: Not Supported 00:13:33.672 Abort Command Limit: 4 00:13:33.672 Async Event Request Limit: 4 00:13:33.672 Number of Firmware Slots: N/A 00:13:33.672 Firmware Slot 1 Read-Only: N/A 00:13:33.672 Firmware Activation Without Reset: N/A 00:13:33.672 Multiple Update Detection Support: N/A 00:13:33.672 Firmware Update Granularity: No Information Provided 00:13:33.672 Per-Namespace SMART Log: Yes 00:13:33.672 Asymmetric Namespace Access Log Page: Not Supported 00:13:33.672 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:13:33.672 Command Effects Log Page: Supported 00:13:33.672 Get Log Page Extended Data: Supported 00:13:33.672 Telemetry Log Pages: Not Supported 00:13:33.672 Persistent Event Log Pages: Not Supported 00:13:33.672 Supported Log Pages Log Page: May Support 00:13:33.672 Commands Supported & Effects Log Page: Not Supported 00:13:33.672 Feature Identifiers & Effects Log Page:May Support 00:13:33.672 NVMe-MI Commands & Effects Log Page: May Support 00:13:33.672 Data Area 4 for Telemetry Log: Not Supported 00:13:33.672 Error Log Page Entries Supported: 1 00:13:33.672 Keep Alive: Not Supported 00:13:33.672 00:13:33.672 NVM Command Set Attributes 00:13:33.672 ========================== 00:13:33.672 Submission Queue Entry Size 00:13:33.672 Max: 64 00:13:33.672 Min: 64 00:13:33.672 Completion Queue Entry Size 00:13:33.672 Max: 16 00:13:33.672 Min: 16 00:13:33.672 Number of Namespaces: 256 00:13:33.672 Compare Command: Supported 00:13:33.672 Write Uncorrectable Command: Not Supported 00:13:33.672 Dataset Management Command: Supported 00:13:33.672 Write Zeroes Command: Supported 00:13:33.672 Set Features Save Field: Supported 00:13:33.672 Reservations: Not Supported 00:13:33.672 Timestamp: Supported 00:13:33.672 Copy: Supported 00:13:33.672 Volatile Write Cache: Present 00:13:33.672 Atomic Write Unit (Normal): 1 00:13:33.672 Atomic Write Unit (PFail): 1 00:13:33.672 Atomic Compare & Write Unit: 1 00:13:33.672 Fused Compare & Write: Not Supported 00:13:33.672 Scatter-Gather List 00:13:33.672 SGL Command Set: Supported 00:13:33.672 SGL Keyed: Not Supported 00:13:33.672 SGL Bit Bucket Descriptor: Not Supported 00:13:33.672 SGL Metadata Pointer: Not Supported 00:13:33.672 Oversized SGL: Not Supported 00:13:33.672 SGL Metadata Address: Not Supported 00:13:33.672 SGL Offset: Not Supported 00:13:33.672 Transport SGL Data Block: Not Supported 00:13:33.672 Replay Protected Memory Block: Not Supported 00:13:33.672 00:13:33.672 Firmware Slot Information 00:13:33.672 ========================= 00:13:33.672 Active slot: 1 00:13:33.672 Slot 1 Firmware Revision: 1.0 00:13:33.672 00:13:33.672 00:13:33.672 Commands Supported and Effects 00:13:33.672 ============================== 00:13:33.672 Admin Commands 00:13:33.672 -------------- 00:13:33.672 Delete I/O Submission Queue (00h): Supported 00:13:33.672 Create I/O Submission Queue (01h): Supported 00:13:33.672 
Get Log Page (02h): Supported 00:13:33.672 Delete I/O Completion Queue (04h): Supported 00:13:33.672 Create I/O Completion Queue (05h): Supported 00:13:33.672 Identify (06h): Supported 00:13:33.672 Abort (08h): Supported 00:13:33.672 Set Features (09h): Supported 00:13:33.672 Get Features (0Ah): Supported 00:13:33.672 Asynchronous Event Request (0Ch): Supported 00:13:33.672 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:33.672 Directive Send (19h): Supported 00:13:33.672 Directive Receive (1Ah): Supported 00:13:33.672 Virtualization Management (1Ch): Supported 00:13:33.672 Doorbell Buffer Config (7Ch): Supported 00:13:33.672 Format NVM (80h): Supported LBA-Change 00:13:33.672 I/O Commands 00:13:33.672 ------------ 00:13:33.672 Flush (00h): Supported LBA-Change 00:13:33.672 Write (01h): Supported LBA-Change 00:13:33.672 Read (02h): Supported 00:13:33.672 Compare (05h): Supported 00:13:33.672 Write Zeroes (08h): Supported LBA-Change 00:13:33.672 Dataset Management (09h): Supported LBA-Change 00:13:33.672 Unknown (0Ch): Supported 00:13:33.672 Unknown (12h): Supported 00:13:33.672 Copy (19h): Supported LBA-Change 00:13:33.672 Unknown (1Dh): Supported LBA-Change 00:13:33.672 00:13:33.672 Error Log 00:13:33.672 ========= 00:13:33.672 00:13:33.672 Arbitration 00:13:33.672 =========== 00:13:33.672 Arbitration Burst: no limit 00:13:33.672 00:13:33.672 Power Management 00:13:33.672 ================ 00:13:33.672 Number of Power States: 1 00:13:33.672 Current Power State: Power State #0 00:13:33.672 Power State #0: 00:13:33.672 Max Power: 25.00 W 00:13:33.672 Non-Operational State: Operational 00:13:33.672 Entry Latency: 16 microseconds 00:13:33.672 Exit Latency: 4 microseconds 00:13:33.672 Relative Read Throughput: 0 00:13:33.672 Relative Read Latency: 0 00:13:33.672 Relative Write Throughput: 0 00:13:33.672 Relative Write Latency: 0 00:13:33.672 Idle Power: Not Reported 00:13:33.672 Active Power: Not Reported 00:13:33.672 Non-Operational Permissive Mode: Not Supported 00:13:33.672 00:13:33.672 Health Information 00:13:33.672 ================== 00:13:33.672 Critical Warnings: 00:13:33.672 Available Spare Space: OK 00:13:33.672 Temperature: OK 00:13:33.672 Device Reliability: OK 00:13:33.672 Read Only: No 00:13:33.672 Volatile Memory Backup: OK 00:13:33.672 Current Temperature: 323 Kelvin (50 Celsius) 00:13:33.672 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:33.672 Available Spare: 0% 00:13:33.672 Available Spare Threshold: 0% 00:13:33.672 Life Percentage Used: 0% 00:13:33.672 Data Units Read: 728 00:13:33.672 Data Units Written: 656 00:13:33.672 Host Read Commands: 32283 00:13:33.672 Host Write Commands: 32069 00:13:33.672 Controller Busy Time: 0 minutes 00:13:33.672 Power Cycles: 0 00:13:33.672 Power On Hours: 0 hours 00:13:33.672 Unsafe Shutdowns: 0 00:13:33.672 Unrecoverable Media Errors: 0 00:13:33.672 Lifetime Error Log Entries: 0 00:13:33.672 Warning Temperature Time: 0 minutes 00:13:33.672 Critical Temperature Time: 0 minutes 00:13:33.672 00:13:33.672 Number of Queues 00:13:33.672 ================ 00:13:33.672 Number of I/O Submission Queues: 64 00:13:33.672 Number of I/O Completion Queues: 64 00:13:33.672 00:13:33.672 ZNS Specific Controller Data 00:13:33.672 ============================ 00:13:33.672 Zone Append Size Limit: 0 00:13:33.672 00:13:33.672 00:13:33.672 Active Namespaces 00:13:33.672 ================= 00:13:33.672 Namespace ID:1 00:13:33.672 Error Recovery Timeout: Unlimited 00:13:33.672 Command Set Identifier: NVM (00h) 00:13:33.672 Deallocate: Supported 
00:13:33.672 Deallocated/Unwritten Error: Supported 00:13:33.672 Deallocated Read Value: All 0x00 00:13:33.672 Deallocate in Write Zeroes: Not Supported 00:13:33.672 Deallocated Guard Field: 0xFFFF 00:13:33.672 Flush: Supported 00:13:33.672 Reservation: Not Supported 00:13:33.672 Metadata Transferred as: Separate Metadata Buffer 00:13:33.672 Namespace Sharing Capabilities: Private 00:13:33.672 Size (in LBAs): 1548666 (5GiB) 00:13:33.672 Capacity (in LBAs): 1548666 (5GiB) 00:13:33.672 Utilization (in LBAs): 1548666 (5GiB) 00:13:33.672 Thin Provisioning: Not Supported 00:13:33.672 Per-NS Atomic Units: No 00:13:33.672 Maximum Single Source Range Length: 128 00:13:33.672 Maximum Copy Length: 128 00:13:33.672 Maximum Source Range Count: 128 00:13:33.672 NGUID/EUI64 Never Reused: No 00:13:33.672 Namespace Write Protected: No 00:13:33.672 Number of LBA Formats: 8 00:13:33.672 Current LBA Format: LBA Format #07 00:13:33.672 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:33.672 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:33.672 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:33.672 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:33.672 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:33.673 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:33.673 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:33.673 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:33.673 00:13:33.673 NVM Specific Namespace Data 00:13:33.673 =========================== 00:13:33.673 Logical Block Storage Tag Mask: 0 00:13:33.673 Protection Information Capabilities: 00:13:33.673 16b Guard Protection Information Storage Tag Support: No 00:13:33.673 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:33.673 Storage Tag Check Read Support: No 00:13:33.673 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.673 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.673 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.673 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.673 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.673 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.673 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.673 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.673 18:17:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:33.673 18:17:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:13:33.931 ===================================================== 00:13:33.931 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:33.931 ===================================================== 00:13:33.931 Controller Capabilities/Features 00:13:33.931 ================================ 00:13:33.931 Vendor ID: 1b36 00:13:33.931 Subsystem Vendor ID: 1af4 00:13:33.931 Serial Number: 12341 00:13:33.931 Model Number: QEMU NVMe Ctrl 00:13:33.931 Firmware Version: 8.0.0 00:13:33.931 Recommended Arb Burst: 6 00:13:33.931 IEEE OUI Identifier: 00 54 52 00:13:33.931 Multi-path I/O 00:13:33.931 May have multiple subsystem ports: No 00:13:33.931 May have multiple 
controllers: No 00:13:33.931 Associated with SR-IOV VF: No 00:13:33.931 Max Data Transfer Size: 524288 00:13:33.931 Max Number of Namespaces: 256 00:13:33.931 Max Number of I/O Queues: 64 00:13:33.931 NVMe Specification Version (VS): 1.4 00:13:33.931 NVMe Specification Version (Identify): 1.4 00:13:33.931 Maximum Queue Entries: 2048 00:13:33.931 Contiguous Queues Required: Yes 00:13:33.931 Arbitration Mechanisms Supported 00:13:33.931 Weighted Round Robin: Not Supported 00:13:33.931 Vendor Specific: Not Supported 00:13:33.931 Reset Timeout: 7500 ms 00:13:33.931 Doorbell Stride: 4 bytes 00:13:33.931 NVM Subsystem Reset: Not Supported 00:13:33.931 Command Sets Supported 00:13:33.931 NVM Command Set: Supported 00:13:33.931 Boot Partition: Not Supported 00:13:33.932 Memory Page Size Minimum: 4096 bytes 00:13:33.932 Memory Page Size Maximum: 65536 bytes 00:13:33.932 Persistent Memory Region: Not Supported 00:13:33.932 Optional Asynchronous Events Supported 00:13:33.932 Namespace Attribute Notices: Supported 00:13:33.932 Firmware Activation Notices: Not Supported 00:13:33.932 ANA Change Notices: Not Supported 00:13:33.932 PLE Aggregate Log Change Notices: Not Supported 00:13:33.932 LBA Status Info Alert Notices: Not Supported 00:13:33.932 EGE Aggregate Log Change Notices: Not Supported 00:13:33.932 Normal NVM Subsystem Shutdown event: Not Supported 00:13:33.932 Zone Descriptor Change Notices: Not Supported 00:13:33.932 Discovery Log Change Notices: Not Supported 00:13:33.932 Controller Attributes 00:13:33.932 128-bit Host Identifier: Not Supported 00:13:33.932 Non-Operational Permissive Mode: Not Supported 00:13:33.932 NVM Sets: Not Supported 00:13:33.932 Read Recovery Levels: Not Supported 00:13:33.932 Endurance Groups: Not Supported 00:13:33.932 Predictable Latency Mode: Not Supported 00:13:33.932 Traffic Based Keep ALive: Not Supported 00:13:33.932 Namespace Granularity: Not Supported 00:13:33.932 SQ Associations: Not Supported 00:13:33.932 UUID List: Not Supported 00:13:33.932 Multi-Domain Subsystem: Not Supported 00:13:33.932 Fixed Capacity Management: Not Supported 00:13:33.932 Variable Capacity Management: Not Supported 00:13:33.932 Delete Endurance Group: Not Supported 00:13:33.932 Delete NVM Set: Not Supported 00:13:33.932 Extended LBA Formats Supported: Supported 00:13:33.932 Flexible Data Placement Supported: Not Supported 00:13:33.932 00:13:33.932 Controller Memory Buffer Support 00:13:33.932 ================================ 00:13:33.932 Supported: No 00:13:33.932 00:13:33.932 Persistent Memory Region Support 00:13:33.932 ================================ 00:13:33.932 Supported: No 00:13:33.932 00:13:33.932 Admin Command Set Attributes 00:13:33.932 ============================ 00:13:33.932 Security Send/Receive: Not Supported 00:13:33.932 Format NVM: Supported 00:13:33.932 Firmware Activate/Download: Not Supported 00:13:33.932 Namespace Management: Supported 00:13:33.932 Device Self-Test: Not Supported 00:13:33.932 Directives: Supported 00:13:33.932 NVMe-MI: Not Supported 00:13:33.932 Virtualization Management: Not Supported 00:13:33.932 Doorbell Buffer Config: Supported 00:13:33.932 Get LBA Status Capability: Not Supported 00:13:33.932 Command & Feature Lockdown Capability: Not Supported 00:13:33.932 Abort Command Limit: 4 00:13:33.932 Async Event Request Limit: 4 00:13:33.932 Number of Firmware Slots: N/A 00:13:33.932 Firmware Slot 1 Read-Only: N/A 00:13:33.932 Firmware Activation Without Reset: N/A 00:13:33.932 Multiple Update Detection Support: N/A 00:13:33.932 Firmware Update 
Granularity: No Information Provided 00:13:33.932 Per-Namespace SMART Log: Yes 00:13:33.932 Asymmetric Namespace Access Log Page: Not Supported 00:13:33.932 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:13:33.932 Command Effects Log Page: Supported 00:13:33.932 Get Log Page Extended Data: Supported 00:13:33.932 Telemetry Log Pages: Not Supported 00:13:33.932 Persistent Event Log Pages: Not Supported 00:13:33.932 Supported Log Pages Log Page: May Support 00:13:33.932 Commands Supported & Effects Log Page: Not Supported 00:13:33.932 Feature Identifiers & Effects Log Page:May Support 00:13:33.932 NVMe-MI Commands & Effects Log Page: May Support 00:13:33.932 Data Area 4 for Telemetry Log: Not Supported 00:13:33.932 Error Log Page Entries Supported: 1 00:13:33.932 Keep Alive: Not Supported 00:13:33.932 00:13:33.932 NVM Command Set Attributes 00:13:33.932 ========================== 00:13:33.932 Submission Queue Entry Size 00:13:33.932 Max: 64 00:13:33.932 Min: 64 00:13:33.932 Completion Queue Entry Size 00:13:33.932 Max: 16 00:13:33.932 Min: 16 00:13:33.932 Number of Namespaces: 256 00:13:33.932 Compare Command: Supported 00:13:33.932 Write Uncorrectable Command: Not Supported 00:13:33.932 Dataset Management Command: Supported 00:13:33.932 Write Zeroes Command: Supported 00:13:33.932 Set Features Save Field: Supported 00:13:33.932 Reservations: Not Supported 00:13:33.932 Timestamp: Supported 00:13:33.932 Copy: Supported 00:13:33.932 Volatile Write Cache: Present 00:13:33.932 Atomic Write Unit (Normal): 1 00:13:33.932 Atomic Write Unit (PFail): 1 00:13:33.932 Atomic Compare & Write Unit: 1 00:13:33.932 Fused Compare & Write: Not Supported 00:13:33.932 Scatter-Gather List 00:13:33.932 SGL Command Set: Supported 00:13:33.932 SGL Keyed: Not Supported 00:13:33.932 SGL Bit Bucket Descriptor: Not Supported 00:13:33.932 SGL Metadata Pointer: Not Supported 00:13:33.932 Oversized SGL: Not Supported 00:13:33.932 SGL Metadata Address: Not Supported 00:13:33.932 SGL Offset: Not Supported 00:13:33.932 Transport SGL Data Block: Not Supported 00:13:33.932 Replay Protected Memory Block: Not Supported 00:13:33.932 00:13:33.932 Firmware Slot Information 00:13:33.932 ========================= 00:13:33.932 Active slot: 1 00:13:33.932 Slot 1 Firmware Revision: 1.0 00:13:33.932 00:13:33.932 00:13:33.932 Commands Supported and Effects 00:13:33.932 ============================== 00:13:33.932 Admin Commands 00:13:33.932 -------------- 00:13:33.932 Delete I/O Submission Queue (00h): Supported 00:13:33.932 Create I/O Submission Queue (01h): Supported 00:13:33.932 Get Log Page (02h): Supported 00:13:33.932 Delete I/O Completion Queue (04h): Supported 00:13:33.932 Create I/O Completion Queue (05h): Supported 00:13:33.932 Identify (06h): Supported 00:13:33.932 Abort (08h): Supported 00:13:33.932 Set Features (09h): Supported 00:13:33.932 Get Features (0Ah): Supported 00:13:33.932 Asynchronous Event Request (0Ch): Supported 00:13:33.932 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:33.932 Directive Send (19h): Supported 00:13:33.932 Directive Receive (1Ah): Supported 00:13:33.932 Virtualization Management (1Ch): Supported 00:13:33.932 Doorbell Buffer Config (7Ch): Supported 00:13:33.932 Format NVM (80h): Supported LBA-Change 00:13:33.932 I/O Commands 00:13:33.932 ------------ 00:13:33.932 Flush (00h): Supported LBA-Change 00:13:33.932 Write (01h): Supported LBA-Change 00:13:33.932 Read (02h): Supported 00:13:33.932 Compare (05h): Supported 00:13:33.932 Write Zeroes (08h): Supported LBA-Change 00:13:33.932 
Dataset Management (09h): Supported LBA-Change 00:13:33.932 Unknown (0Ch): Supported 00:13:33.932 Unknown (12h): Supported 00:13:33.932 Copy (19h): Supported LBA-Change 00:13:33.932 Unknown (1Dh): Supported LBA-Change 00:13:33.932 00:13:33.932 Error Log 00:13:33.932 ========= 00:13:33.932 00:13:33.932 Arbitration 00:13:33.932 =========== 00:13:33.932 Arbitration Burst: no limit 00:13:33.932 00:13:33.932 Power Management 00:13:33.932 ================ 00:13:33.932 Number of Power States: 1 00:13:33.932 Current Power State: Power State #0 00:13:33.932 Power State #0: 00:13:33.932 Max Power: 25.00 W 00:13:33.932 Non-Operational State: Operational 00:13:33.932 Entry Latency: 16 microseconds 00:13:33.932 Exit Latency: 4 microseconds 00:13:33.932 Relative Read Throughput: 0 00:13:33.932 Relative Read Latency: 0 00:13:33.932 Relative Write Throughput: 0 00:13:33.932 Relative Write Latency: 0 00:13:33.932 Idle Power: Not Reported 00:13:33.932 Active Power: Not Reported 00:13:33.932 Non-Operational Permissive Mode: Not Supported 00:13:33.932 00:13:33.932 Health Information 00:13:33.932 ================== 00:13:33.932 Critical Warnings: 00:13:33.932 Available Spare Space: OK 00:13:33.932 Temperature: OK 00:13:33.932 Device Reliability: OK 00:13:33.932 Read Only: No 00:13:33.932 Volatile Memory Backup: OK 00:13:33.932 Current Temperature: 323 Kelvin (50 Celsius) 00:13:33.932 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:33.932 Available Spare: 0% 00:13:33.932 Available Spare Threshold: 0% 00:13:33.932 Life Percentage Used: 0% 00:13:33.932 Data Units Read: 1070 00:13:33.932 Data Units Written: 937 00:13:33.932 Host Read Commands: 47085 00:13:33.932 Host Write Commands: 45852 00:13:33.932 Controller Busy Time: 0 minutes 00:13:33.932 Power Cycles: 0 00:13:33.932 Power On Hours: 0 hours 00:13:33.932 Unsafe Shutdowns: 0 00:13:33.932 Unrecoverable Media Errors: 0 00:13:33.932 Lifetime Error Log Entries: 0 00:13:33.932 Warning Temperature Time: 0 minutes 00:13:33.932 Critical Temperature Time: 0 minutes 00:13:33.932 00:13:33.932 Number of Queues 00:13:33.932 ================ 00:13:33.933 Number of I/O Submission Queues: 64 00:13:33.933 Number of I/O Completion Queues: 64 00:13:33.933 00:13:33.933 ZNS Specific Controller Data 00:13:33.933 ============================ 00:13:33.933 Zone Append Size Limit: 0 00:13:33.933 00:13:33.933 00:13:33.933 Active Namespaces 00:13:33.933 ================= 00:13:33.933 Namespace ID:1 00:13:33.933 Error Recovery Timeout: Unlimited 00:13:33.933 Command Set Identifier: NVM (00h) 00:13:33.933 Deallocate: Supported 00:13:33.933 Deallocated/Unwritten Error: Supported 00:13:33.933 Deallocated Read Value: All 0x00 00:13:33.933 Deallocate in Write Zeroes: Not Supported 00:13:33.933 Deallocated Guard Field: 0xFFFF 00:13:33.933 Flush: Supported 00:13:33.933 Reservation: Not Supported 00:13:33.933 Namespace Sharing Capabilities: Private 00:13:33.933 Size (in LBAs): 1310720 (5GiB) 00:13:33.933 Capacity (in LBAs): 1310720 (5GiB) 00:13:33.933 Utilization (in LBAs): 1310720 (5GiB) 00:13:33.933 Thin Provisioning: Not Supported 00:13:33.933 Per-NS Atomic Units: No 00:13:33.933 Maximum Single Source Range Length: 128 00:13:33.933 Maximum Copy Length: 128 00:13:33.933 Maximum Source Range Count: 128 00:13:33.933 NGUID/EUI64 Never Reused: No 00:13:33.933 Namespace Write Protected: No 00:13:33.933 Number of LBA Formats: 8 00:13:33.933 Current LBA Format: LBA Format #04 00:13:33.933 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:33.933 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:13:33.933 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:33.933 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:33.933 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:33.933 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:33.933 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:33.933 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:33.933 00:13:33.933 NVM Specific Namespace Data 00:13:33.933 =========================== 00:13:33.933 Logical Block Storage Tag Mask: 0 00:13:33.933 Protection Information Capabilities: 00:13:33.933 16b Guard Protection Information Storage Tag Support: No 00:13:33.933 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:33.933 Storage Tag Check Read Support: No 00:13:33.933 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.933 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.933 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.933 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.933 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.933 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.933 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.933 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:33.933 18:17:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:33.933 18:17:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:13:34.193 ===================================================== 00:13:34.193 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:34.193 ===================================================== 00:13:34.193 Controller Capabilities/Features 00:13:34.193 ================================ 00:13:34.193 Vendor ID: 1b36 00:13:34.193 Subsystem Vendor ID: 1af4 00:13:34.193 Serial Number: 12342 00:13:34.193 Model Number: QEMU NVMe Ctrl 00:13:34.193 Firmware Version: 8.0.0 00:13:34.193 Recommended Arb Burst: 6 00:13:34.193 IEEE OUI Identifier: 00 54 52 00:13:34.193 Multi-path I/O 00:13:34.193 May have multiple subsystem ports: No 00:13:34.193 May have multiple controllers: No 00:13:34.193 Associated with SR-IOV VF: No 00:13:34.193 Max Data Transfer Size: 524288 00:13:34.193 Max Number of Namespaces: 256 00:13:34.193 Max Number of I/O Queues: 64 00:13:34.193 NVMe Specification Version (VS): 1.4 00:13:34.193 NVMe Specification Version (Identify): 1.4 00:13:34.193 Maximum Queue Entries: 2048 00:13:34.193 Contiguous Queues Required: Yes 00:13:34.193 Arbitration Mechanisms Supported 00:13:34.193 Weighted Round Robin: Not Supported 00:13:34.193 Vendor Specific: Not Supported 00:13:34.193 Reset Timeout: 7500 ms 00:13:34.193 Doorbell Stride: 4 bytes 00:13:34.193 NVM Subsystem Reset: Not Supported 00:13:34.193 Command Sets Supported 00:13:34.193 NVM Command Set: Supported 00:13:34.193 Boot Partition: Not Supported 00:13:34.193 Memory Page Size Minimum: 4096 bytes 00:13:34.193 Memory Page Size Maximum: 65536 bytes 00:13:34.193 Persistent Memory Region: Not Supported 00:13:34.193 Optional Asynchronous Events Supported 00:13:34.193 Namespace Attribute Notices: Supported 00:13:34.193 Firmware 
Activation Notices: Not Supported 00:13:34.193 ANA Change Notices: Not Supported 00:13:34.193 PLE Aggregate Log Change Notices: Not Supported 00:13:34.193 LBA Status Info Alert Notices: Not Supported 00:13:34.193 EGE Aggregate Log Change Notices: Not Supported 00:13:34.193 Normal NVM Subsystem Shutdown event: Not Supported 00:13:34.193 Zone Descriptor Change Notices: Not Supported 00:13:34.193 Discovery Log Change Notices: Not Supported 00:13:34.193 Controller Attributes 00:13:34.193 128-bit Host Identifier: Not Supported 00:13:34.193 Non-Operational Permissive Mode: Not Supported 00:13:34.193 NVM Sets: Not Supported 00:13:34.193 Read Recovery Levels: Not Supported 00:13:34.193 Endurance Groups: Not Supported 00:13:34.193 Predictable Latency Mode: Not Supported 00:13:34.193 Traffic Based Keep ALive: Not Supported 00:13:34.193 Namespace Granularity: Not Supported 00:13:34.193 SQ Associations: Not Supported 00:13:34.193 UUID List: Not Supported 00:13:34.193 Multi-Domain Subsystem: Not Supported 00:13:34.193 Fixed Capacity Management: Not Supported 00:13:34.193 Variable Capacity Management: Not Supported 00:13:34.193 Delete Endurance Group: Not Supported 00:13:34.193 Delete NVM Set: Not Supported 00:13:34.193 Extended LBA Formats Supported: Supported 00:13:34.193 Flexible Data Placement Supported: Not Supported 00:13:34.193 00:13:34.193 Controller Memory Buffer Support 00:13:34.193 ================================ 00:13:34.193 Supported: No 00:13:34.193 00:13:34.193 Persistent Memory Region Support 00:13:34.193 ================================ 00:13:34.193 Supported: No 00:13:34.193 00:13:34.193 Admin Command Set Attributes 00:13:34.193 ============================ 00:13:34.193 Security Send/Receive: Not Supported 00:13:34.194 Format NVM: Supported 00:13:34.194 Firmware Activate/Download: Not Supported 00:13:34.194 Namespace Management: Supported 00:13:34.194 Device Self-Test: Not Supported 00:13:34.194 Directives: Supported 00:13:34.194 NVMe-MI: Not Supported 00:13:34.194 Virtualization Management: Not Supported 00:13:34.194 Doorbell Buffer Config: Supported 00:13:34.194 Get LBA Status Capability: Not Supported 00:13:34.194 Command & Feature Lockdown Capability: Not Supported 00:13:34.194 Abort Command Limit: 4 00:13:34.194 Async Event Request Limit: 4 00:13:34.194 Number of Firmware Slots: N/A 00:13:34.194 Firmware Slot 1 Read-Only: N/A 00:13:34.194 Firmware Activation Without Reset: N/A 00:13:34.194 Multiple Update Detection Support: N/A 00:13:34.194 Firmware Update Granularity: No Information Provided 00:13:34.194 Per-Namespace SMART Log: Yes 00:13:34.194 Asymmetric Namespace Access Log Page: Not Supported 00:13:34.194 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:13:34.194 Command Effects Log Page: Supported 00:13:34.194 Get Log Page Extended Data: Supported 00:13:34.194 Telemetry Log Pages: Not Supported 00:13:34.194 Persistent Event Log Pages: Not Supported 00:13:34.194 Supported Log Pages Log Page: May Support 00:13:34.194 Commands Supported & Effects Log Page: Not Supported 00:13:34.194 Feature Identifiers & Effects Log Page:May Support 00:13:34.194 NVMe-MI Commands & Effects Log Page: May Support 00:13:34.194 Data Area 4 for Telemetry Log: Not Supported 00:13:34.194 Error Log Page Entries Supported: 1 00:13:34.194 Keep Alive: Not Supported 00:13:34.194 00:13:34.194 NVM Command Set Attributes 00:13:34.194 ========================== 00:13:34.194 Submission Queue Entry Size 00:13:34.194 Max: 64 00:13:34.194 Min: 64 00:13:34.194 Completion Queue Entry Size 00:13:34.194 Max: 16 
00:13:34.194 Min: 16 00:13:34.194 Number of Namespaces: 256 00:13:34.194 Compare Command: Supported 00:13:34.194 Write Uncorrectable Command: Not Supported 00:13:34.194 Dataset Management Command: Supported 00:13:34.194 Write Zeroes Command: Supported 00:13:34.194 Set Features Save Field: Supported 00:13:34.194 Reservations: Not Supported 00:13:34.194 Timestamp: Supported 00:13:34.194 Copy: Supported 00:13:34.194 Volatile Write Cache: Present 00:13:34.194 Atomic Write Unit (Normal): 1 00:13:34.194 Atomic Write Unit (PFail): 1 00:13:34.194 Atomic Compare & Write Unit: 1 00:13:34.194 Fused Compare & Write: Not Supported 00:13:34.194 Scatter-Gather List 00:13:34.194 SGL Command Set: Supported 00:13:34.194 SGL Keyed: Not Supported 00:13:34.194 SGL Bit Bucket Descriptor: Not Supported 00:13:34.194 SGL Metadata Pointer: Not Supported 00:13:34.194 Oversized SGL: Not Supported 00:13:34.194 SGL Metadata Address: Not Supported 00:13:34.194 SGL Offset: Not Supported 00:13:34.194 Transport SGL Data Block: Not Supported 00:13:34.194 Replay Protected Memory Block: Not Supported 00:13:34.194 00:13:34.194 Firmware Slot Information 00:13:34.194 ========================= 00:13:34.194 Active slot: 1 00:13:34.194 Slot 1 Firmware Revision: 1.0 00:13:34.194 00:13:34.194 00:13:34.194 Commands Supported and Effects 00:13:34.194 ============================== 00:13:34.194 Admin Commands 00:13:34.194 -------------- 00:13:34.194 Delete I/O Submission Queue (00h): Supported 00:13:34.194 Create I/O Submission Queue (01h): Supported 00:13:34.194 Get Log Page (02h): Supported 00:13:34.194 Delete I/O Completion Queue (04h): Supported 00:13:34.194 Create I/O Completion Queue (05h): Supported 00:13:34.194 Identify (06h): Supported 00:13:34.194 Abort (08h): Supported 00:13:34.194 Set Features (09h): Supported 00:13:34.194 Get Features (0Ah): Supported 00:13:34.194 Asynchronous Event Request (0Ch): Supported 00:13:34.194 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:34.194 Directive Send (19h): Supported 00:13:34.194 Directive Receive (1Ah): Supported 00:13:34.194 Virtualization Management (1Ch): Supported 00:13:34.194 Doorbell Buffer Config (7Ch): Supported 00:13:34.194 Format NVM (80h): Supported LBA-Change 00:13:34.194 I/O Commands 00:13:34.194 ------------ 00:13:34.194 Flush (00h): Supported LBA-Change 00:13:34.194 Write (01h): Supported LBA-Change 00:13:34.194 Read (02h): Supported 00:13:34.194 Compare (05h): Supported 00:13:34.194 Write Zeroes (08h): Supported LBA-Change 00:13:34.194 Dataset Management (09h): Supported LBA-Change 00:13:34.194 Unknown (0Ch): Supported 00:13:34.194 Unknown (12h): Supported 00:13:34.194 Copy (19h): Supported LBA-Change 00:13:34.194 Unknown (1Dh): Supported LBA-Change 00:13:34.194 00:13:34.194 Error Log 00:13:34.194 ========= 00:13:34.194 00:13:34.194 Arbitration 00:13:34.194 =========== 00:13:34.194 Arbitration Burst: no limit 00:13:34.194 00:13:34.194 Power Management 00:13:34.194 ================ 00:13:34.194 Number of Power States: 1 00:13:34.194 Current Power State: Power State #0 00:13:34.194 Power State #0: 00:13:34.194 Max Power: 25.00 W 00:13:34.194 Non-Operational State: Operational 00:13:34.194 Entry Latency: 16 microseconds 00:13:34.194 Exit Latency: 4 microseconds 00:13:34.194 Relative Read Throughput: 0 00:13:34.194 Relative Read Latency: 0 00:13:34.194 Relative Write Throughput: 0 00:13:34.194 Relative Write Latency: 0 00:13:34.194 Idle Power: Not Reported 00:13:34.194 Active Power: Not Reported 00:13:34.194 Non-Operational Permissive Mode: Not Supported 
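The controller dumps above and below are each produced by one spdk_nvme_identify invocation per PCIe BDF, driven by the for-bdf loop in nvme.sh that the trace lines record. As a minimal illustrative sketch (not part of the captured log output), a single field such as the Power State #0 ceiling could be pulled out of each dump as follows; the bdfs list and the grep pattern are assumptions chosen to match the controllers probed in this run:

#!/usr/bin/env bash
# Illustrative sketch only: print the "Max Power" line for each QEMU NVMe
# controller seen in this log. Uses the same identify binary path and the
# same -r/-i flags that appear verbatim in the nvme.sh traces above.
IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
bdfs=("0000:00:10.0" "0000:00:11.0" "0000:00:12.0" "0000:00:13.0")  # assumed to match this run
for bdf in "${bdfs[@]}"; do
    # -r selects the PCIe transport and address; -i 0 mirrors the recorded invocations
    "$IDENTIFY" -r "trtype:PCIe traddr:$bdf" -i 0 | grep -m1 'Max Power' | sed "s/^/$bdf: /"
done

Run against this set of controllers, the sketch would report 25.00 W for each, matching the Power State #0 entries recorded in the dumps.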
00:13:34.194 00:13:34.194 Health Information 00:13:34.194 ================== 00:13:34.194 Critical Warnings: 00:13:34.194 Available Spare Space: OK 00:13:34.194 Temperature: OK 00:13:34.194 Device Reliability: OK 00:13:34.194 Read Only: No 00:13:34.194 Volatile Memory Backup: OK 00:13:34.194 Current Temperature: 323 Kelvin (50 Celsius) 00:13:34.194 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:34.194 Available Spare: 0% 00:13:34.194 Available Spare Threshold: 0% 00:13:34.194 Life Percentage Used: 0% 00:13:34.194 Data Units Read: 2221 00:13:34.194 Data Units Written: 2009 00:13:34.194 Host Read Commands: 98113 00:13:34.194 Host Write Commands: 96382 00:13:34.194 Controller Busy Time: 0 minutes 00:13:34.194 Power Cycles: 0 00:13:34.194 Power On Hours: 0 hours 00:13:34.194 Unsafe Shutdowns: 0 00:13:34.194 Unrecoverable Media Errors: 0 00:13:34.194 Lifetime Error Log Entries: 0 00:13:34.194 Warning Temperature Time: 0 minutes 00:13:34.194 Critical Temperature Time: 0 minutes 00:13:34.194 00:13:34.194 Number of Queues 00:13:34.194 ================ 00:13:34.194 Number of I/O Submission Queues: 64 00:13:34.194 Number of I/O Completion Queues: 64 00:13:34.194 00:13:34.194 ZNS Specific Controller Data 00:13:34.194 ============================ 00:13:34.194 Zone Append Size Limit: 0 00:13:34.194 00:13:34.194 00:13:34.194 Active Namespaces 00:13:34.194 ================= 00:13:34.194 Namespace ID:1 00:13:34.194 Error Recovery Timeout: Unlimited 00:13:34.194 Command Set Identifier: NVM (00h) 00:13:34.194 Deallocate: Supported 00:13:34.194 Deallocated/Unwritten Error: Supported 00:13:34.194 Deallocated Read Value: All 0x00 00:13:34.194 Deallocate in Write Zeroes: Not Supported 00:13:34.194 Deallocated Guard Field: 0xFFFF 00:13:34.194 Flush: Supported 00:13:34.194 Reservation: Not Supported 00:13:34.194 Namespace Sharing Capabilities: Private 00:13:34.194 Size (in LBAs): 1048576 (4GiB) 00:13:34.194 Capacity (in LBAs): 1048576 (4GiB) 00:13:34.194 Utilization (in LBAs): 1048576 (4GiB) 00:13:34.194 Thin Provisioning: Not Supported 00:13:34.194 Per-NS Atomic Units: No 00:13:34.194 Maximum Single Source Range Length: 128 00:13:34.194 Maximum Copy Length: 128 00:13:34.194 Maximum Source Range Count: 128 00:13:34.194 NGUID/EUI64 Never Reused: No 00:13:34.194 Namespace Write Protected: No 00:13:34.194 Number of LBA Formats: 8 00:13:34.194 Current LBA Format: LBA Format #04 00:13:34.194 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:34.194 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:34.195 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:34.195 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:34.195 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:34.195 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:34.195 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:34.195 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:34.195 00:13:34.195 NVM Specific Namespace Data 00:13:34.195 =========================== 00:13:34.195 Logical Block Storage Tag Mask: 0 00:13:34.195 Protection Information Capabilities: 00:13:34.195 16b Guard Protection Information Storage Tag Support: No 00:13:34.195 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:34.195 Storage Tag Check Read Support: No 00:13:34.195 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Namespace ID:2 00:13:34.195 Error Recovery Timeout: Unlimited 00:13:34.195 Command Set Identifier: NVM (00h) 00:13:34.195 Deallocate: Supported 00:13:34.195 Deallocated/Unwritten Error: Supported 00:13:34.195 Deallocated Read Value: All 0x00 00:13:34.195 Deallocate in Write Zeroes: Not Supported 00:13:34.195 Deallocated Guard Field: 0xFFFF 00:13:34.195 Flush: Supported 00:13:34.195 Reservation: Not Supported 00:13:34.195 Namespace Sharing Capabilities: Private 00:13:34.195 Size (in LBAs): 1048576 (4GiB) 00:13:34.195 Capacity (in LBAs): 1048576 (4GiB) 00:13:34.195 Utilization (in LBAs): 1048576 (4GiB) 00:13:34.195 Thin Provisioning: Not Supported 00:13:34.195 Per-NS Atomic Units: No 00:13:34.195 Maximum Single Source Range Length: 128 00:13:34.195 Maximum Copy Length: 128 00:13:34.195 Maximum Source Range Count: 128 00:13:34.195 NGUID/EUI64 Never Reused: No 00:13:34.195 Namespace Write Protected: No 00:13:34.195 Number of LBA Formats: 8 00:13:34.195 Current LBA Format: LBA Format #04 00:13:34.195 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:34.195 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:34.195 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:34.195 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:34.195 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:34.195 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:34.195 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:34.195 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:34.195 00:13:34.195 NVM Specific Namespace Data 00:13:34.195 =========================== 00:13:34.195 Logical Block Storage Tag Mask: 0 00:13:34.195 Protection Information Capabilities: 00:13:34.195 16b Guard Protection Information Storage Tag Support: No 00:13:34.195 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:34.195 Storage Tag Check Read Support: No 00:13:34.195 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.195 Namespace ID:3 00:13:34.195 Error Recovery Timeout: Unlimited 00:13:34.195 Command Set Identifier: NVM (00h) 00:13:34.195 Deallocate: Supported 00:13:34.195 Deallocated/Unwritten Error: Supported 00:13:34.195 Deallocated Read 
Value: All 0x00 00:13:34.195 Deallocate in Write Zeroes: Not Supported 00:13:34.195 Deallocated Guard Field: 0xFFFF 00:13:34.195 Flush: Supported 00:13:34.195 Reservation: Not Supported 00:13:34.195 Namespace Sharing Capabilities: Private 00:13:34.195 Size (in LBAs): 1048576 (4GiB) 00:13:34.195 Capacity (in LBAs): 1048576 (4GiB) 00:13:34.195 Utilization (in LBAs): 1048576 (4GiB) 00:13:34.195 Thin Provisioning: Not Supported 00:13:34.195 Per-NS Atomic Units: No 00:13:34.195 Maximum Single Source Range Length: 128 00:13:34.195 Maximum Copy Length: 128 00:13:34.195 Maximum Source Range Count: 128 00:13:34.195 NGUID/EUI64 Never Reused: No 00:13:34.195 Namespace Write Protected: No 00:13:34.195 Number of LBA Formats: 8 00:13:34.195 Current LBA Format: LBA Format #04 00:13:34.195 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:34.195 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:34.195 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:34.195 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:34.195 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:34.195 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:34.195 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:34.195 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:34.195 00:13:34.195 NVM Specific Namespace Data 00:13:34.195 =========================== 00:13:34.195 Logical Block Storage Tag Mask: 0 00:13:34.195 Protection Information Capabilities: 00:13:34.195 16b Guard Protection Information Storage Tag Support: No 00:13:34.195 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:34.454 Storage Tag Check Read Support: No 00:13:34.454 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.454 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.454 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.454 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.454 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.454 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.454 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.454 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.454 18:17:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:34.454 18:17:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:13:34.725 ===================================================== 00:13:34.725 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:34.725 ===================================================== 00:13:34.725 Controller Capabilities/Features 00:13:34.725 ================================ 00:13:34.725 Vendor ID: 1b36 00:13:34.725 Subsystem Vendor ID: 1af4 00:13:34.725 Serial Number: 12343 00:13:34.725 Model Number: QEMU NVMe Ctrl 00:13:34.725 Firmware Version: 8.0.0 00:13:34.725 Recommended Arb Burst: 6 00:13:34.725 IEEE OUI Identifier: 00 54 52 00:13:34.725 Multi-path I/O 00:13:34.725 May have multiple subsystem ports: No 00:13:34.725 May have multiple controllers: Yes 00:13:34.725 Associated with SR-IOV VF: No 00:13:34.725 Max Data Transfer Size: 524288 00:13:34.725 Max Number of Namespaces: 
256 00:13:34.725 Max Number of I/O Queues: 64 00:13:34.725 NVMe Specification Version (VS): 1.4 00:13:34.725 NVMe Specification Version (Identify): 1.4 00:13:34.725 Maximum Queue Entries: 2048 00:13:34.725 Contiguous Queues Required: Yes 00:13:34.725 Arbitration Mechanisms Supported 00:13:34.725 Weighted Round Robin: Not Supported 00:13:34.725 Vendor Specific: Not Supported 00:13:34.725 Reset Timeout: 7500 ms 00:13:34.725 Doorbell Stride: 4 bytes 00:13:34.725 NVM Subsystem Reset: Not Supported 00:13:34.725 Command Sets Supported 00:13:34.725 NVM Command Set: Supported 00:13:34.725 Boot Partition: Not Supported 00:13:34.725 Memory Page Size Minimum: 4096 bytes 00:13:34.725 Memory Page Size Maximum: 65536 bytes 00:13:34.725 Persistent Memory Region: Not Supported 00:13:34.725 Optional Asynchronous Events Supported 00:13:34.725 Namespace Attribute Notices: Supported 00:13:34.725 Firmware Activation Notices: Not Supported 00:13:34.725 ANA Change Notices: Not Supported 00:13:34.725 PLE Aggregate Log Change Notices: Not Supported 00:13:34.725 LBA Status Info Alert Notices: Not Supported 00:13:34.725 EGE Aggregate Log Change Notices: Not Supported 00:13:34.725 Normal NVM Subsystem Shutdown event: Not Supported 00:13:34.725 Zone Descriptor Change Notices: Not Supported 00:13:34.725 Discovery Log Change Notices: Not Supported 00:13:34.725 Controller Attributes 00:13:34.725 128-bit Host Identifier: Not Supported 00:13:34.725 Non-Operational Permissive Mode: Not Supported 00:13:34.725 NVM Sets: Not Supported 00:13:34.725 Read Recovery Levels: Not Supported 00:13:34.725 Endurance Groups: Supported 00:13:34.725 Predictable Latency Mode: Not Supported 00:13:34.725 Traffic Based Keep Alive: Not Supported 00:13:34.725 Namespace Granularity: Not Supported 00:13:34.725 SQ Associations: Not Supported 00:13:34.725 UUID List: Not Supported 00:13:34.726 Multi-Domain Subsystem: Not Supported 00:13:34.726 Fixed Capacity Management: Not Supported 00:13:34.726 Variable Capacity Management: Not Supported 00:13:34.726 Delete Endurance Group: Not Supported 00:13:34.726 Delete NVM Set: Not Supported 00:13:34.726 Extended LBA Formats Supported: Supported 00:13:34.726 Flexible Data Placement Supported: Supported 00:13:34.726 00:13:34.726 Controller Memory Buffer Support 00:13:34.726 ================================ 00:13:34.726 Supported: No 00:13:34.726 00:13:34.726 Persistent Memory Region Support 00:13:34.726 ================================ 00:13:34.726 Supported: No 00:13:34.726 00:13:34.726 Admin Command Set Attributes 00:13:34.726 ============================ 00:13:34.726 Security Send/Receive: Not Supported 00:13:34.726 Format NVM: Supported 00:13:34.726 Firmware Activate/Download: Not Supported 00:13:34.726 Namespace Management: Supported 00:13:34.726 Device Self-Test: Not Supported 00:13:34.726 Directives: Supported 00:13:34.726 NVMe-MI: Not Supported 00:13:34.726 Virtualization Management: Not Supported 00:13:34.726 Doorbell Buffer Config: Supported 00:13:34.726 Get LBA Status Capability: Not Supported 00:13:34.726 Command & Feature Lockdown Capability: Not Supported 00:13:34.726 Abort Command Limit: 4 00:13:34.726 Async Event Request Limit: 4 00:13:34.726 Number of Firmware Slots: N/A 00:13:34.726 Firmware Slot 1 Read-Only: N/A 00:13:34.726 Firmware Activation Without Reset: N/A 00:13:34.726 Multiple Update Detection Support: N/A 00:13:34.726 Firmware Update Granularity: No Information Provided 00:13:34.726 Per-Namespace SMART Log: Yes 00:13:34.726 Asymmetric Namespace Access Log Page: Not Supported
00:13:34.726 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:13:34.726 Command Effects Log Page: Supported 00:13:34.726 Get Log Page Extended Data: Supported 00:13:34.726 Telemetry Log Pages: Not Supported 00:13:34.726 Persistent Event Log Pages: Not Supported 00:13:34.726 Supported Log Pages Log Page: May Support 00:13:34.726 Commands Supported & Effects Log Page: Not Supported 00:13:34.727 Feature Identifiers & Effects Log Page: May Support 00:13:34.727 NVMe-MI Commands & Effects Log Page: May Support 00:13:34.727 Data Area 4 for Telemetry Log: Not Supported 00:13:34.727 Error Log Page Entries Supported: 1 00:13:34.727 Keep Alive: Not Supported 00:13:34.727 00:13:34.727 NVM Command Set Attributes 00:13:34.727 ========================== 00:13:34.727 Submission Queue Entry Size 00:13:34.727 Max: 64 00:13:34.727 Min: 64 00:13:34.727 Completion Queue Entry Size 00:13:34.727 Max: 16 00:13:34.727 Min: 16 00:13:34.727 Number of Namespaces: 256 00:13:34.727 Compare Command: Supported 00:13:34.727 Write Uncorrectable Command: Not Supported 00:13:34.727 Dataset Management Command: Supported 00:13:34.727 Write Zeroes Command: Supported 00:13:34.727 Set Features Save Field: Supported 00:13:34.727 Reservations: Not Supported 00:13:34.727 Timestamp: Supported 00:13:34.727 Copy: Supported 00:13:34.727 Volatile Write Cache: Present 00:13:34.727 Atomic Write Unit (Normal): 1 00:13:34.727 Atomic Write Unit (PFail): 1 00:13:34.727 Atomic Compare & Write Unit: 1 00:13:34.727 Fused Compare & Write: Not Supported 00:13:34.727 Scatter-Gather List 00:13:34.727 SGL Command Set: Supported 00:13:34.727 SGL Keyed: Not Supported 00:13:34.727 SGL Bit Bucket Descriptor: Not Supported 00:13:34.727 SGL Metadata Pointer: Not Supported 00:13:34.727 Oversized SGL: Not Supported 00:13:34.727 SGL Metadata Address: Not Supported 00:13:34.727 SGL Offset: Not Supported 00:13:34.727 Transport SGL Data Block: Not Supported 00:13:34.727 Replay Protected Memory Block: Not Supported 00:13:34.727 00:13:34.727 Firmware Slot Information 00:13:34.727 ========================= 00:13:34.727 Active slot: 1 00:13:34.727 Slot 1 Firmware Revision: 1.0 00:13:34.727 00:13:34.727 00:13:34.727 Commands Supported and Effects 00:13:34.727 ============================== 00:13:34.727 Admin Commands 00:13:34.727 -------------- 00:13:34.727 Delete I/O Submission Queue (00h): Supported 00:13:34.727 Create I/O Submission Queue (01h): Supported 00:13:34.727 Get Log Page (02h): Supported 00:13:34.727 Delete I/O Completion Queue (04h): Supported 00:13:34.727 Create I/O Completion Queue (05h): Supported 00:13:34.728 Identify (06h): Supported 00:13:34.728 Abort (08h): Supported 00:13:34.728 Set Features (09h): Supported 00:13:34.728 Get Features (0Ah): Supported 00:13:34.728 Asynchronous Event Request (0Ch): Supported 00:13:34.728 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:34.728 Directive Send (19h): Supported 00:13:34.728 Directive Receive (1Ah): Supported 00:13:34.728 Virtualization Management (1Ch): Supported 00:13:34.728 Doorbell Buffer Config (7Ch): Supported 00:13:34.728 Format NVM (80h): Supported LBA-Change 00:13:34.728 I/O Commands 00:13:34.728 ------------ 00:13:34.728 Flush (00h): Supported LBA-Change 00:13:34.728 Write (01h): Supported LBA-Change 00:13:34.728 Read (02h): Supported 00:13:34.728 Compare (05h): Supported 00:13:34.728 Write Zeroes (08h): Supported LBA-Change 00:13:34.728 Dataset Management (09h): Supported LBA-Change 00:13:34.728 Unknown (0Ch): Supported 00:13:34.728 Unknown (12h): Supported 00:13:34.728 Copy
(19h): Supported LBA-Change 00:13:34.728 Unknown (1Dh): Supported LBA-Change 00:13:34.728 00:13:34.728 Error Log 00:13:34.728 ========= 00:13:34.728 00:13:34.728 Arbitration 00:13:34.728 =========== 00:13:34.728 Arbitration Burst: no limit 00:13:34.728 00:13:34.728 Power Management 00:13:34.728 ================ 00:13:34.728 Number of Power States: 1 00:13:34.728 Current Power State: Power State #0 00:13:34.728 Power State #0: 00:13:34.728 Max Power: 25.00 W 00:13:34.728 Non-Operational State: Operational 00:13:34.728 Entry Latency: 16 microseconds 00:13:34.728 Exit Latency: 4 microseconds 00:13:34.728 Relative Read Throughput: 0 00:13:34.728 Relative Read Latency: 0 00:13:34.728 Relative Write Throughput: 0 00:13:34.728 Relative Write Latency: 0 00:13:34.728 Idle Power: Not Reported 00:13:34.728 Active Power: Not Reported 00:13:34.728 Non-Operational Permissive Mode: Not Supported 00:13:34.728 00:13:34.728 Health Information 00:13:34.728 ================== 00:13:34.729 Critical Warnings: 00:13:34.729 Available Spare Space: OK 00:13:34.729 Temperature: OK 00:13:34.729 Device Reliability: OK 00:13:34.729 Read Only: No 00:13:34.729 Volatile Memory Backup: OK 00:13:34.729 Current Temperature: 323 Kelvin (50 Celsius) 00:13:34.729 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:34.729 Available Spare: 0% 00:13:34.729 Available Spare Threshold: 0% 00:13:34.729 Life Percentage Used: 0% 00:13:34.729 Data Units Read: 797 00:13:34.729 Data Units Written: 726 00:13:34.729 Host Read Commands: 33246 00:13:34.729 Host Write Commands: 32669 00:13:34.729 Controller Busy Time: 0 minutes 00:13:34.729 Power Cycles: 0 00:13:34.729 Power On Hours: 0 hours 00:13:34.729 Unsafe Shutdowns: 0 00:13:34.729 Unrecoverable Media Errors: 0 00:13:34.729 Lifetime Error Log Entries: 0 00:13:34.729 Warning Temperature Time: 0 minutes 00:13:34.729 Critical Temperature Time: 0 minutes 00:13:34.729 00:13:34.729 Number of Queues 00:13:34.729 ================ 00:13:34.729 Number of I/O Submission Queues: 64 00:13:34.729 Number of I/O Completion Queues: 64 00:13:34.729 00:13:34.729 ZNS Specific Controller Data 00:13:34.729 ============================ 00:13:34.729 Zone Append Size Limit: 0 00:13:34.729 00:13:34.729 00:13:34.729 Active Namespaces 00:13:34.729 ================= 00:13:34.729 Namespace ID:1 00:13:34.729 Error Recovery Timeout: Unlimited 00:13:34.729 Command Set Identifier: NVM (00h) 00:13:34.729 Deallocate: Supported 00:13:34.729 Deallocated/Unwritten Error: Supported 00:13:34.729 Deallocated Read Value: All 0x00 00:13:34.729 Deallocate in Write Zeroes: Not Supported 00:13:34.729 Deallocated Guard Field: 0xFFFF 00:13:34.729 Flush: Supported 00:13:34.729 Reservation: Not Supported 00:13:34.729 Namespace Sharing Capabilities: Multiple Controllers 00:13:34.729 Size (in LBAs): 262144 (1GiB) 00:13:34.729 Capacity (in LBAs): 262144 (1GiB) 00:13:34.729 Utilization (in LBAs): 262144 (1GiB) 00:13:34.729 Thin Provisioning: Not Supported 00:13:34.729 Per-NS Atomic Units: No 00:13:34.729 Maximum Single Source Range Length: 128 00:13:34.729 Maximum Copy Length: 128 00:13:34.729 Maximum Source Range Count: 128 00:13:34.729 NGUID/EUI64 Never Reused: No 00:13:34.729 Namespace Write Protected: No 00:13:34.729 Endurance group ID: 1 00:13:34.729 Number of LBA Formats: 8 00:13:34.729 Current LBA Format: LBA Format #04 00:13:34.729 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:34.729 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:34.729 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:34.729 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:13:34.729 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:34.729 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:34.729 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:34.729 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:34.729 00:13:34.729 Get Feature FDP: 00:13:34.729 ================ 00:13:34.729 Enabled: Yes 00:13:34.729 FDP configuration index: 0 00:13:34.729 00:13:34.729 FDP configurations log page 00:13:34.729 =========================== 00:13:34.729 Number of FDP configurations: 1 00:13:34.729 Version: 0 00:13:34.730 Size: 112 00:13:34.730 FDP Configuration Descriptor: 0 00:13:34.730 Descriptor Size: 96 00:13:34.730 Reclaim Group Identifier format: 2 00:13:34.730 FDP Volatile Write Cache: Not Present 00:13:34.730 FDP Configuration: Valid 00:13:34.730 Vendor Specific Size: 0 00:13:34.730 Number of Reclaim Groups: 2 00:13:34.730 Number of Reclaim Unit Handles: 8 00:13:34.730 Max Placement Identifiers: 128 00:13:34.730 Number of Namespaces Supported: 256 00:13:34.730 Reclaim Unit Nominal Size: 6000000 bytes 00:13:34.730 Estimated Reclaim Unit Time Limit: Not Reported 00:13:34.730 RUH Desc #000: RUH Type: Initially Isolated 00:13:34.730 RUH Desc #001: RUH Type: Initially Isolated 00:13:34.730 RUH Desc #002: RUH Type: Initially Isolated 00:13:34.730 RUH Desc #003: RUH Type: Initially Isolated 00:13:34.730 RUH Desc #004: RUH Type: Initially Isolated 00:13:34.730 RUH Desc #005: RUH Type: Initially Isolated 00:13:34.730 RUH Desc #006: RUH Type: Initially Isolated 00:13:34.730 RUH Desc #007: RUH Type: Initially Isolated 00:13:34.730 00:13:34.730 FDP reclaim unit handle usage log page 00:13:34.730 ====================================== 00:13:34.730 Number of Reclaim Unit Handles: 8 00:13:34.730 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:34.730 RUH Usage Desc #001: RUH Attributes: Unused 00:13:34.730 RUH Usage Desc #002: RUH Attributes: Unused 00:13:34.730 RUH Usage Desc #003: RUH Attributes: Unused 00:13:34.730 RUH Usage Desc #004: RUH Attributes: Unused 00:13:34.730 RUH Usage Desc #005: RUH Attributes: Unused 00:13:34.730 RUH Usage Desc #006: RUH Attributes: Unused 00:13:34.730 RUH Usage Desc #007: RUH Attributes: Unused 00:13:34.730 00:13:34.730 FDP statistics log page 00:13:34.730 ======================= 00:13:34.730 Host bytes with metadata written: 457351168 00:13:34.730 Media bytes with metadata written: 457416704 00:13:34.730 Media bytes erased: 0 00:13:34.730 00:13:34.730 FDP events log page 00:13:34.730 =================== 00:13:34.730 Number of FDP events: 0 00:13:34.730 00:13:34.730 NVM Specific Namespace Data 00:13:34.730 =========================== 00:13:34.730 Logical Block Storage Tag Mask: 0 00:13:34.730 Protection Information Capabilities: 00:13:34.730 16b Guard Protection Information Storage Tag Support: No 00:13:34.730 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:34.730 Storage Tag Check Read Support: No 00:13:34.730 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.730 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.730 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.730 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.730 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.730 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.730 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.730 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:34.730 00:13:34.730 real 0m1.597s 00:13:34.730 user 0m0.581s 00:13:34.730 sys 0m0.786s 00:13:34.730 18:17:27 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.730 18:17:27 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:13:34.730 ************************************ 00:13:34.730 END TEST nvme_identify 00:13:34.730 ************************************ 00:13:34.731 18:17:27 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:13:34.731 18:17:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:34.731 18:17:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.731 18:17:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:34.731 ************************************ 00:13:34.731 START TEST nvme_perf 00:13:34.731 ************************************ 00:13:34.731 18:17:27 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:13:34.731 18:17:27 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:13:36.108 Initializing NVMe Controllers 00:13:36.108 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:36.108 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:36.108 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:36.108 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:36.108 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:36.108 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:36.108 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:36.108 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:36.108 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:36.108 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:36.108 Initialization complete. Launching workers. 
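
For reference: the Identify data dumped by the nvme_identify test above comes straight from SPDK's public NVMe driver API, so the same fields can be read programmatically. Below is a minimal C sketch, not part of this test run; it assumes an installed SPDK (headers plus libraries, linked the same way as SPDK's examples/nvme/hello_world), a controller still bound to a userspace driver as in the test VM, and root privileges. The program name and the subset of fields printed are illustrative only.

/* identify_sketch.c - minimal sketch of reading Identify data via SPDK.
 * Assumption: controller 0000:00:13.0 is bound to vfio-pci (or uio),
 * as set up by prepare_nvme.sh above. Error handling is trimmed.
 */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;
	uint32_t nsid;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport string the harness passed to spdk_nvme_identify via -r. */
	if (spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:00:13.0") != 0) {
		return 1;
	}
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Identify Controller data backing the "Controller Capabilities/Features" block. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Vendor ID: %04x\n", cdata->vid);
	printf("Serial Number: %.*s\n", (int)sizeof(cdata->sn), (const char *)cdata->sn);
	printf("Model Number: %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);
	printf("Firmware Version: %.*s\n", (int)sizeof(cdata->fr), (const char *)cdata->fr);
	printf("Max Number of Namespaces: %u\n", cdata->nn);

	/* Identify Namespace data backing each "Namespace ID:n" section. */
	for (nsid = 1; nsid <= cdata->nn; nsid++) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		const struct spdk_nvme_ns_data *nsdata;

		if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
			continue;
		}
		nsdata = spdk_nvme_ns_get_data(ns);
		printf("NSID %u: Size (in LBAs): %" PRIu64 " Capacity (in LBAs): %" PRIu64 "\n",
		       nsid, nsdata->nsze, nsdata->ncap);
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}

The log pages shown above (command effects, FDP configurations, FDP statistics) would additionally be fetched through spdk_nvme_ctrlr_cmd_get_log_page(), which is how the identify example retrieves every log page it prints.
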
00:13:36.108 ======================================================== 00:13:36.108 Latency(us) 00:13:36.108 Device Information : IOPS MiB/s Average min max 00:13:36.108 PCIE (0000:00:10.0) NSID 1 from core 0: 14800.45 173.44 8665.98 6491.23 42187.73 00:13:36.108 PCIE (0000:00:11.0) NSID 1 from core 0: 14800.45 173.44 8650.46 6575.59 39823.57 00:13:36.108 PCIE (0000:00:13.0) NSID 1 from core 0: 14800.45 173.44 8633.22 6551.33 38404.71 00:13:36.109 PCIE (0000:00:12.0) NSID 1 from core 0: 14800.45 173.44 8614.48 6544.42 36121.00 00:13:36.109 PCIE (0000:00:12.0) NSID 2 from core 0: 14800.45 173.44 8596.02 6555.59 33870.10 00:13:36.109 PCIE (0000:00:12.0) NSID 3 from core 0: 14864.24 174.19 8540.83 6562.07 25691.77 00:13:36.109 ======================================================== 00:13:36.109 Total : 88866.47 1041.40 8616.78 6491.23 42187.73 00:13:36.109 00:13:36.109 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:36.109 ================================================================================= 00:13:36.109 1.00000% : 6954.257us 00:13:36.109 10.00000% : 7440.769us 00:13:36.109 25.00000% : 7669.715us 00:13:36.109 50.00000% : 8013.135us 00:13:36.109 75.00000% : 8471.029us 00:13:36.109 90.00000% : 10073.656us 00:13:36.109 95.00000% : 11619.046us 00:13:36.109 98.00000% : 16255.217us 00:13:36.109 99.00000% : 18430.211us 00:13:36.109 99.50000% : 34570.955us 00:13:36.109 99.90000% : 41897.251us 00:13:36.109 99.99000% : 42126.197us 00:13:36.109 99.99900% : 42355.144us 00:13:36.109 99.99990% : 42355.144us 00:13:36.109 99.99999% : 42355.144us 00:13:36.109 00:13:36.109 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:36.109 ================================================================================= 00:13:36.109 1.00000% : 6982.875us 00:13:36.109 10.00000% : 7498.005us 00:13:36.109 25.00000% : 7726.952us 00:13:36.109 50.00000% : 8013.135us 00:13:36.109 75.00000% : 8413.792us 00:13:36.109 90.00000% : 10073.656us 00:13:36.109 95.00000% : 11733.520us 00:13:36.109 98.00000% : 16140.744us 00:13:36.109 99.00000% : 18888.105us 00:13:36.109 99.50000% : 32510.435us 00:13:36.109 99.90000% : 39378.837us 00:13:36.109 99.99000% : 39836.730us 00:13:36.109 99.99900% : 39836.730us 00:13:36.109 99.99990% : 39836.730us 00:13:36.109 99.99999% : 39836.730us 00:13:36.109 00:13:36.109 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:36.109 ================================================================================= 00:13:36.109 1.00000% : 7011.493us 00:13:36.109 10.00000% : 7498.005us 00:13:36.109 25.00000% : 7726.952us 00:13:36.109 50.00000% : 8013.135us 00:13:36.109 75.00000% : 8413.792us 00:13:36.109 90.00000% : 10016.419us 00:13:36.109 95.00000% : 11676.283us 00:13:36.109 98.00000% : 16255.217us 00:13:36.109 99.00000% : 18544.685us 00:13:36.109 99.50000% : 30678.861us 00:13:36.109 99.90000% : 38005.156us 00:13:36.109 99.99000% : 38463.050us 00:13:36.109 99.99900% : 38463.050us 00:13:36.109 99.99990% : 38463.050us 00:13:36.109 99.99999% : 38463.050us 00:13:36.109 00:13:36.109 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:36.109 ================================================================================= 00:13:36.109 1.00000% : 7011.493us 00:13:36.109 10.00000% : 7498.005us 00:13:36.109 25.00000% : 7726.952us 00:13:36.109 50.00000% : 8013.135us 00:13:36.109 75.00000% : 8413.792us 00:13:36.109 90.00000% : 10016.419us 00:13:36.109 95.00000% : 11619.046us 00:13:36.109 98.00000% : 15911.797us 00:13:36.109 
99.00000% : 18086.791us 00:13:36.109 99.50000% : 28503.867us 00:13:36.109 99.90000% : 35715.689us 00:13:36.109 99.99000% : 36173.583us 00:13:36.109 99.99900% : 36173.583us 00:13:36.109 99.99990% : 36173.583us 00:13:36.109 99.99999% : 36173.583us 00:13:36.109 00:13:36.109 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:36.109 ================================================================================= 00:13:36.109 1.00000% : 7011.493us 00:13:36.109 10.00000% : 7498.005us 00:13:36.109 25.00000% : 7726.952us 00:13:36.109 50.00000% : 8013.135us 00:13:36.109 75.00000% : 8413.792us 00:13:36.109 90.00000% : 10073.656us 00:13:36.109 95.00000% : 11504.573us 00:13:36.109 98.00000% : 15911.797us 00:13:36.109 99.00000% : 17514.424us 00:13:36.109 99.50000% : 26214.400us 00:13:36.109 99.90000% : 33655.169us 00:13:36.109 99.99000% : 33884.115us 00:13:36.109 99.99900% : 33884.115us 00:13:36.109 99.99990% : 33884.115us 00:13:36.109 99.99999% : 33884.115us 00:13:36.109 00:13:36.109 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:36.109 ================================================================================= 00:13:36.109 1.00000% : 7011.493us 00:13:36.109 10.00000% : 7498.005us 00:13:36.109 25.00000% : 7726.952us 00:13:36.109 50.00000% : 8013.135us 00:13:36.109 75.00000% : 8413.792us 00:13:36.109 90.00000% : 10188.129us 00:13:36.109 95.00000% : 11619.046us 00:13:36.109 98.00000% : 16140.744us 00:13:36.109 99.00000% : 17171.004us 00:13:36.109 99.50000% : 18201.265us 00:13:36.109 99.90000% : 25298.613us 00:13:36.109 99.99000% : 25756.507us 00:13:36.109 99.99900% : 25756.507us 00:13:36.109 99.99990% : 25756.507us 00:13:36.109 99.99999% : 25756.507us 00:13:36.109 00:13:36.109 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:36.109 ============================================================================== 00:13:36.109 Range in us Cumulative IO count 00:13:36.109 6467.745 - 6496.363: 0.0135% ( 2) 00:13:36.109 6496.363 - 6524.982: 0.0202% ( 1) 00:13:36.109 6524.982 - 6553.600: 0.0337% ( 2) 00:13:36.109 6553.600 - 6582.218: 0.0539% ( 3) 00:13:36.109 6582.218 - 6610.837: 0.0943% ( 6) 00:13:36.109 6610.837 - 6639.455: 0.1684% ( 11) 00:13:36.109 6639.455 - 6668.073: 0.2290% ( 9) 00:13:36.109 6668.073 - 6696.692: 0.2761% ( 7) 00:13:36.109 6696.692 - 6725.310: 0.3637% ( 13) 00:13:36.109 6725.310 - 6753.928: 0.4243% ( 9) 00:13:36.109 6753.928 - 6782.547: 0.4984% ( 11) 00:13:36.109 6782.547 - 6811.165: 0.5657% ( 10) 00:13:36.109 6811.165 - 6839.783: 0.6600% ( 14) 00:13:36.109 6839.783 - 6868.402: 0.7543% ( 14) 00:13:36.109 6868.402 - 6897.020: 0.8284% ( 11) 00:13:36.109 6897.020 - 6925.638: 0.9227% ( 14) 00:13:36.109 6925.638 - 6954.257: 1.0641% ( 21) 00:13:36.109 6954.257 - 6982.875: 1.1517% ( 13) 00:13:36.109 6982.875 - 7011.493: 1.3402% ( 28) 00:13:36.109 7011.493 - 7040.112: 1.5558% ( 32) 00:13:36.109 7040.112 - 7068.730: 1.8656% ( 46) 00:13:36.109 7068.730 - 7097.348: 2.2158% ( 52) 00:13:36.109 7097.348 - 7125.967: 2.6536% ( 65) 00:13:36.109 7125.967 - 7154.585: 3.0509% ( 59) 00:13:36.109 7154.585 - 7183.203: 3.6773% ( 93) 00:13:36.109 7183.203 - 7211.822: 4.2497% ( 85) 00:13:36.109 7211.822 - 7240.440: 4.9569% ( 105) 00:13:36.109 7240.440 - 7269.059: 5.7718% ( 121) 00:13:36.109 7269.059 - 7297.677: 6.5329% ( 113) 00:13:36.109 7297.677 - 7326.295: 7.5633% ( 153) 00:13:36.109 7326.295 - 7383.532: 9.7320% ( 322) 00:13:36.109 7383.532 - 7440.769: 12.4731% ( 407) 00:13:36.109 7440.769 - 7498.005: 15.7058% ( 480) 00:13:36.109 7498.005 - 
7555.242: 19.3629% ( 543) 00:13:36.109 7555.242 - 7612.479: 22.9930% ( 539) 00:13:36.109 7612.479 - 7669.715: 26.8858% ( 578) 00:13:36.109 7669.715 - 7726.952: 30.9133% ( 598) 00:13:36.109 7726.952 - 7784.189: 34.6781% ( 559) 00:13:36.109 7784.189 - 7841.425: 38.5035% ( 568) 00:13:36.109 7841.425 - 7898.662: 42.3357% ( 569) 00:13:36.109 7898.662 - 7955.899: 46.2352% ( 579) 00:13:36.109 7955.899 - 8013.135: 50.1280% ( 578) 00:13:36.109 8013.135 - 8070.372: 54.0073% ( 576) 00:13:36.109 8070.372 - 8127.609: 57.7115% ( 550) 00:13:36.109 8127.609 - 8184.845: 61.2069% ( 519) 00:13:36.109 8184.845 - 8242.082: 64.7023% ( 519) 00:13:36.109 8242.082 - 8299.319: 67.9081% ( 476) 00:13:36.109 8299.319 - 8356.555: 70.7570% ( 423) 00:13:36.109 8356.555 - 8413.792: 73.5991% ( 422) 00:13:36.109 8413.792 - 8471.029: 75.9159% ( 344) 00:13:36.109 8471.029 - 8528.266: 77.8489% ( 287) 00:13:36.109 8528.266 - 8585.502: 79.2834% ( 213) 00:13:36.109 8585.502 - 8642.739: 80.3273% ( 155) 00:13:36.109 8642.739 - 8699.976: 81.2029% ( 130) 00:13:36.109 8699.976 - 8757.212: 81.9370% ( 109) 00:13:36.109 8757.212 - 8814.449: 82.5700% ( 94) 00:13:36.109 8814.449 - 8871.686: 83.0280% ( 68) 00:13:36.109 8871.686 - 8928.922: 83.4725% ( 66) 00:13:36.109 8928.922 - 8986.159: 83.7958% ( 48) 00:13:36.109 8986.159 - 9043.396: 84.1595% ( 54) 00:13:36.109 9043.396 - 9100.632: 84.5164% ( 53) 00:13:36.109 9100.632 - 9157.869: 84.9273% ( 61) 00:13:36.109 9157.869 - 9215.106: 85.2909% ( 54) 00:13:36.109 9215.106 - 9272.342: 85.6210% ( 49) 00:13:36.109 9272.342 - 9329.579: 85.9577% ( 50) 00:13:36.109 9329.579 - 9386.816: 86.2473% ( 43) 00:13:36.109 9386.816 - 9444.052: 86.6110% ( 54) 00:13:36.109 9444.052 - 9501.289: 86.9881% ( 56) 00:13:36.110 9501.289 - 9558.526: 87.3249% ( 50) 00:13:36.110 9558.526 - 9615.762: 87.6010% ( 41) 00:13:36.110 9615.762 - 9672.999: 87.9108% ( 46) 00:13:36.110 9672.999 - 9730.236: 88.2543% ( 51) 00:13:36.110 9730.236 - 9787.472: 88.5978% ( 51) 00:13:36.110 9787.472 - 9844.709: 88.8941% ( 44) 00:13:36.110 9844.709 - 9901.946: 89.1972% ( 45) 00:13:36.110 9901.946 - 9959.183: 89.4868% ( 43) 00:13:36.110 9959.183 - 10016.419: 89.8168% ( 49) 00:13:36.110 10016.419 - 10073.656: 90.1131% ( 44) 00:13:36.110 10073.656 - 10130.893: 90.3893% ( 41) 00:13:36.110 10130.893 - 10188.129: 90.6519% ( 39) 00:13:36.110 10188.129 - 10245.366: 90.9685% ( 47) 00:13:36.110 10245.366 - 10302.603: 91.2244% ( 38) 00:13:36.110 10302.603 - 10359.839: 91.4601% ( 35) 00:13:36.110 10359.839 - 10417.076: 91.6756% ( 32) 00:13:36.110 10417.076 - 10474.313: 91.9181% ( 36) 00:13:36.110 10474.313 - 10531.549: 92.1606% ( 36) 00:13:36.110 10531.549 - 10588.786: 92.3693% ( 31) 00:13:36.110 10588.786 - 10646.023: 92.6185% ( 37) 00:13:36.110 10646.023 - 10703.259: 92.8273% ( 31) 00:13:36.110 10703.259 - 10760.496: 93.0361% ( 31) 00:13:36.110 10760.496 - 10817.733: 93.2651% ( 34) 00:13:36.110 10817.733 - 10874.969: 93.4537% ( 28) 00:13:36.110 10874.969 - 10932.206: 93.6355% ( 27) 00:13:36.110 10932.206 - 10989.443: 93.8241% ( 28) 00:13:36.110 10989.443 - 11046.679: 93.9520% ( 19) 00:13:36.110 11046.679 - 11103.916: 94.0867% ( 20) 00:13:36.110 11103.916 - 11161.153: 94.2147% ( 19) 00:13:36.110 11161.153 - 11218.390: 94.3292% ( 17) 00:13:36.110 11218.390 - 11275.626: 94.4437% ( 17) 00:13:36.110 11275.626 - 11332.863: 94.5582% ( 17) 00:13:36.110 11332.863 - 11390.100: 94.6457% ( 13) 00:13:36.110 11390.100 - 11447.336: 94.7468% ( 15) 00:13:36.110 11447.336 - 11504.573: 94.8613% ( 17) 00:13:36.110 11504.573 - 11561.810: 94.9555% ( 14) 00:13:36.110 11561.810 
- 11619.046: 95.0700% ( 17) 00:13:36.110 11619.046 - 11676.283: 95.1643% ( 14) 00:13:36.110 11676.283 - 11733.520: 95.2654% ( 15) 00:13:36.110 11733.520 - 11790.756: 95.3596% ( 14) 00:13:36.110 11790.756 - 11847.993: 95.4539% ( 14) 00:13:36.110 11847.993 - 11905.230: 95.5280% ( 11) 00:13:36.110 11905.230 - 11962.466: 95.5954% ( 10) 00:13:36.110 11962.466 - 12019.703: 95.6492% ( 8) 00:13:36.110 12019.703 - 12076.940: 95.7166% ( 10) 00:13:36.110 12076.940 - 12134.176: 95.7772% ( 9) 00:13:36.110 12134.176 - 12191.413: 95.8244% ( 7) 00:13:36.110 12191.413 - 12248.650: 95.9052% ( 12) 00:13:36.110 12248.650 - 12305.886: 95.9523% ( 7) 00:13:36.110 12305.886 - 12363.123: 96.0264% ( 11) 00:13:36.110 12363.123 - 12420.360: 96.0870% ( 9) 00:13:36.110 12420.360 - 12477.597: 96.1409% ( 8) 00:13:36.110 12477.597 - 12534.833: 96.1880% ( 7) 00:13:36.110 12534.833 - 12592.070: 96.2082% ( 3) 00:13:36.110 12592.070 - 12649.307: 96.2419% ( 5) 00:13:36.110 12649.307 - 12706.543: 96.2621% ( 3) 00:13:36.110 12706.543 - 12763.780: 96.3093% ( 7) 00:13:36.110 12763.780 - 12821.017: 96.3362% ( 4) 00:13:36.110 12821.017 - 12878.253: 96.3631% ( 4) 00:13:36.110 12878.253 - 12935.490: 96.3901% ( 4) 00:13:36.110 12935.490 - 12992.727: 96.4103% ( 3) 00:13:36.110 12992.727 - 13049.963: 96.4238% ( 2) 00:13:36.110 13049.963 - 13107.200: 96.4305% ( 1) 00:13:36.110 13107.200 - 13164.437: 96.4440% ( 2) 00:13:36.110 13164.437 - 13221.673: 96.4507% ( 1) 00:13:36.110 13221.673 - 13278.910: 96.4642% ( 2) 00:13:36.110 13278.910 - 13336.147: 96.4709% ( 1) 00:13:36.110 13336.147 - 13393.383: 96.4844% ( 2) 00:13:36.110 13393.383 - 13450.620: 96.4978% ( 2) 00:13:36.110 13450.620 - 13507.857: 96.5046% ( 1) 00:13:36.110 13507.857 - 13565.093: 96.5180% ( 2) 00:13:36.110 13565.093 - 13622.330: 96.5315% ( 2) 00:13:36.110 13622.330 - 13679.567: 96.5517% ( 3) 00:13:36.110 13679.567 - 13736.803: 96.5787% ( 4) 00:13:36.110 13736.803 - 13794.040: 96.5854% ( 1) 00:13:36.110 13794.040 - 13851.277: 96.5989% ( 2) 00:13:36.110 13851.277 - 13908.514: 96.6393% ( 6) 00:13:36.110 13908.514 - 13965.750: 96.6527% ( 2) 00:13:36.110 13965.750 - 14022.987: 96.6797% ( 4) 00:13:36.110 14022.987 - 14080.224: 96.7134% ( 5) 00:13:36.110 14080.224 - 14137.460: 96.7336% ( 3) 00:13:36.110 14137.460 - 14194.697: 96.7538% ( 3) 00:13:36.110 14194.697 - 14251.934: 96.7807% ( 4) 00:13:36.110 14251.934 - 14309.170: 96.8077% ( 4) 00:13:36.110 14309.170 - 14366.407: 96.8346% ( 4) 00:13:36.110 14366.407 - 14423.644: 96.8750% ( 6) 00:13:36.110 14423.644 - 14480.880: 96.9154% ( 6) 00:13:36.110 14480.880 - 14538.117: 96.9356% ( 3) 00:13:36.110 14538.117 - 14595.354: 96.9760% ( 6) 00:13:36.110 14595.354 - 14652.590: 97.0164% ( 6) 00:13:36.110 14652.590 - 14767.064: 97.0905% ( 11) 00:13:36.110 14767.064 - 14881.537: 97.1511% ( 9) 00:13:36.110 14881.537 - 14996.010: 97.2252% ( 11) 00:13:36.110 14996.010 - 15110.484: 97.2993% ( 11) 00:13:36.110 15110.484 - 15224.957: 97.3599% ( 9) 00:13:36.110 15224.957 - 15339.431: 97.3936% ( 5) 00:13:36.110 15339.431 - 15453.904: 97.4946% ( 15) 00:13:36.110 15453.904 - 15568.377: 97.5687% ( 11) 00:13:36.110 15568.377 - 15682.851: 97.6360% ( 10) 00:13:36.110 15682.851 - 15797.324: 97.6899% ( 8) 00:13:36.110 15797.324 - 15911.797: 97.7842% ( 14) 00:13:36.110 15911.797 - 16026.271: 97.9054% ( 18) 00:13:36.110 16026.271 - 16140.744: 97.9795% ( 11) 00:13:36.110 16140.744 - 16255.217: 98.0738% ( 14) 00:13:36.110 16255.217 - 16369.691: 98.1748% ( 15) 00:13:36.110 16369.691 - 16484.164: 98.2759% ( 15) 00:13:36.110 16484.164 - 16598.638: 98.3769% ( 15) 
00:13:36.110 16598.638 - 16713.111: 98.4240% ( 7) 00:13:36.110 16713.111 - 16827.584: 98.4712% ( 7) 00:13:36.110 16827.584 - 16942.058: 98.5116% ( 6) 00:13:36.110 16942.058 - 17056.531: 98.5655% ( 8) 00:13:36.110 17056.531 - 17171.004: 98.6395% ( 11) 00:13:36.110 17171.004 - 17285.478: 98.7069% ( 10) 00:13:36.110 17285.478 - 17399.951: 98.7540% ( 7) 00:13:36.110 17399.951 - 17514.424: 98.8079% ( 8) 00:13:36.110 17514.424 - 17628.898: 98.8281% ( 3) 00:13:36.110 17628.898 - 17743.371: 98.8483% ( 3) 00:13:36.110 17743.371 - 17857.845: 98.8753% ( 4) 00:13:36.110 17857.845 - 17972.318: 98.8887% ( 2) 00:13:36.110 17972.318 - 18086.791: 98.9089% ( 3) 00:13:36.110 18086.791 - 18201.265: 98.9426% ( 5) 00:13:36.110 18201.265 - 18315.738: 98.9696% ( 4) 00:13:36.110 18315.738 - 18430.211: 99.0032% ( 5) 00:13:36.110 18430.211 - 18544.685: 99.0234% ( 3) 00:13:36.110 18544.685 - 18659.158: 99.0571% ( 5) 00:13:36.110 18659.158 - 18773.631: 99.0908% ( 5) 00:13:36.110 18773.631 - 18888.105: 99.1177% ( 4) 00:13:36.110 18888.105 - 19002.578: 99.1379% ( 3) 00:13:36.110 32739.382 - 32968.328: 99.1783% ( 6) 00:13:36.110 32968.328 - 33197.275: 99.2255% ( 7) 00:13:36.110 33197.275 - 33426.222: 99.2726% ( 7) 00:13:36.110 33426.222 - 33655.169: 99.3198% ( 7) 00:13:36.110 33655.169 - 33884.115: 99.3669% ( 7) 00:13:36.110 33884.115 - 34113.062: 99.4141% ( 7) 00:13:36.110 34113.062 - 34342.009: 99.4612% ( 7) 00:13:36.110 34342.009 - 34570.955: 99.5151% ( 8) 00:13:36.110 34570.955 - 34799.902: 99.5622% ( 7) 00:13:36.110 34799.902 - 35028.849: 99.5690% ( 1) 00:13:36.110 39836.730 - 40065.677: 99.5824% ( 2) 00:13:36.110 40065.677 - 40294.624: 99.6228% ( 6) 00:13:36.110 40294.624 - 40523.570: 99.6700% ( 7) 00:13:36.110 40523.570 - 40752.517: 99.7171% ( 7) 00:13:36.110 40752.517 - 40981.464: 99.7575% ( 6) 00:13:36.110 40981.464 - 41210.410: 99.8047% ( 7) 00:13:36.110 41210.410 - 41439.357: 99.8518% ( 7) 00:13:36.110 41439.357 - 41668.304: 99.8990% ( 7) 00:13:36.110 41668.304 - 41897.251: 99.9461% ( 7) 00:13:36.110 41897.251 - 42126.197: 99.9933% ( 7) 00:13:36.110 42126.197 - 42355.144: 100.0000% ( 1) 00:13:36.110 00:13:36.110 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:36.110 ============================================================================== 00:13:36.110 Range in us Cumulative IO count 00:13:36.110 6553.600 - 6582.218: 0.0067% ( 1) 00:13:36.110 6582.218 - 6610.837: 0.0202% ( 2) 00:13:36.110 6610.837 - 6639.455: 0.0337% ( 2) 00:13:36.110 6639.455 - 6668.073: 0.0606% ( 4) 00:13:36.110 6668.073 - 6696.692: 0.1212% ( 9) 00:13:36.110 6696.692 - 6725.310: 0.1953% ( 11) 00:13:36.110 6725.310 - 6753.928: 0.2627% ( 10) 00:13:36.110 6753.928 - 6782.547: 0.3502% ( 13) 00:13:36.110 6782.547 - 6811.165: 0.4310% ( 12) 00:13:36.110 6811.165 - 6839.783: 0.5119% ( 12) 00:13:36.110 6839.783 - 6868.402: 0.5859% ( 11) 00:13:36.110 6868.402 - 6897.020: 0.6668% ( 12) 00:13:36.110 6897.020 - 6925.638: 0.7947% ( 19) 00:13:36.110 6925.638 - 6954.257: 0.8890% ( 14) 00:13:36.110 6954.257 - 6982.875: 1.0035% ( 17) 00:13:36.110 6982.875 - 7011.493: 1.1180% ( 17) 00:13:36.110 7011.493 - 7040.112: 1.2392% ( 18) 00:13:36.110 7040.112 - 7068.730: 1.4009% ( 24) 00:13:36.110 7068.730 - 7097.348: 1.5692% ( 25) 00:13:36.110 7097.348 - 7125.967: 1.8454% ( 41) 00:13:36.110 7125.967 - 7154.585: 2.2293% ( 57) 00:13:36.110 7154.585 - 7183.203: 2.6266% ( 59) 00:13:36.110 7183.203 - 7211.822: 3.0307% ( 60) 00:13:36.110 7211.822 - 7240.440: 3.6301% ( 89) 00:13:36.110 7240.440 - 7269.059: 4.2026% ( 85) 00:13:36.110 7269.059 - 
7297.677: 4.8761% ( 100) 00:13:36.110 7297.677 - 7326.295: 5.5967% ( 107) 00:13:36.110 7326.295 - 7383.532: 7.3411% ( 259) 00:13:36.110 7383.532 - 7440.769: 9.5838% ( 333) 00:13:36.110 7440.769 - 7498.005: 12.3720% ( 414) 00:13:36.111 7498.005 - 7555.242: 15.7934% ( 508) 00:13:36.111 7555.242 - 7612.479: 19.9017% ( 610) 00:13:36.111 7612.479 - 7669.715: 24.0975% ( 623) 00:13:36.111 7669.715 - 7726.952: 28.5224% ( 657) 00:13:36.111 7726.952 - 7784.189: 33.0348% ( 670) 00:13:36.111 7784.189 - 7841.425: 37.7627% ( 702) 00:13:36.111 7841.425 - 7898.662: 42.4300% ( 693) 00:13:36.111 7898.662 - 7955.899: 47.0366% ( 684) 00:13:36.111 7955.899 - 8013.135: 51.6568% ( 686) 00:13:36.111 8013.135 - 8070.372: 55.9671% ( 640) 00:13:36.111 8070.372 - 8127.609: 60.1360% ( 619) 00:13:36.111 8127.609 - 8184.845: 63.9615% ( 568) 00:13:36.111 8184.845 - 8242.082: 67.6724% ( 551) 00:13:36.111 8242.082 - 8299.319: 71.0938% ( 508) 00:13:36.111 8299.319 - 8356.555: 73.9965% ( 431) 00:13:36.111 8356.555 - 8413.792: 76.4278% ( 361) 00:13:36.111 8413.792 - 8471.029: 78.2732% ( 274) 00:13:36.111 8471.029 - 8528.266: 79.5528% ( 190) 00:13:36.111 8528.266 - 8585.502: 80.4688% ( 136) 00:13:36.111 8585.502 - 8642.739: 81.2298% ( 113) 00:13:36.111 8642.739 - 8699.976: 81.8966% ( 99) 00:13:36.111 8699.976 - 8757.212: 82.5162% ( 92) 00:13:36.111 8757.212 - 8814.449: 82.9876% ( 70) 00:13:36.111 8814.449 - 8871.686: 83.3648% ( 56) 00:13:36.111 8871.686 - 8928.922: 83.7487% ( 57) 00:13:36.111 8928.922 - 8986.159: 84.0921% ( 51) 00:13:36.111 8986.159 - 9043.396: 84.4356% ( 51) 00:13:36.111 9043.396 - 9100.632: 84.7858% ( 52) 00:13:36.111 9100.632 - 9157.869: 85.1293% ( 51) 00:13:36.111 9157.869 - 9215.106: 85.4997% ( 55) 00:13:36.111 9215.106 - 9272.342: 85.8432% ( 51) 00:13:36.111 9272.342 - 9329.579: 86.1800% ( 50) 00:13:36.111 9329.579 - 9386.816: 86.5167% ( 50) 00:13:36.111 9386.816 - 9444.052: 86.8400% ( 48) 00:13:36.111 9444.052 - 9501.289: 87.1700% ( 49) 00:13:36.111 9501.289 - 9558.526: 87.5000% ( 49) 00:13:36.111 9558.526 - 9615.762: 87.8165% ( 47) 00:13:36.111 9615.762 - 9672.999: 88.1398% ( 48) 00:13:36.111 9672.999 - 9730.236: 88.4698% ( 49) 00:13:36.111 9730.236 - 9787.472: 88.7729% ( 45) 00:13:36.111 9787.472 - 9844.709: 89.0558% ( 42) 00:13:36.111 9844.709 - 9901.946: 89.3050% ( 37) 00:13:36.111 9901.946 - 9959.183: 89.5744% ( 40) 00:13:36.111 9959.183 - 10016.419: 89.8707% ( 44) 00:13:36.111 10016.419 - 10073.656: 90.1738% ( 45) 00:13:36.111 10073.656 - 10130.893: 90.4903% ( 47) 00:13:36.111 10130.893 - 10188.129: 90.8001% ( 46) 00:13:36.111 10188.129 - 10245.366: 91.0830% ( 42) 00:13:36.111 10245.366 - 10302.603: 91.3456% ( 39) 00:13:36.111 10302.603 - 10359.839: 91.5948% ( 37) 00:13:36.111 10359.839 - 10417.076: 91.8305% ( 35) 00:13:36.111 10417.076 - 10474.313: 92.0999% ( 40) 00:13:36.111 10474.313 - 10531.549: 92.3963% ( 44) 00:13:36.111 10531.549 - 10588.786: 92.6320% ( 35) 00:13:36.111 10588.786 - 10646.023: 92.8812% ( 37) 00:13:36.111 10646.023 - 10703.259: 93.1304% ( 37) 00:13:36.111 10703.259 - 10760.496: 93.3122% ( 27) 00:13:36.111 10760.496 - 10817.733: 93.5075% ( 29) 00:13:36.111 10817.733 - 10874.969: 93.6961% ( 28) 00:13:36.111 10874.969 - 10932.206: 93.8443% ( 22) 00:13:36.111 10932.206 - 10989.443: 93.9655% ( 18) 00:13:36.111 10989.443 - 11046.679: 94.0800% ( 17) 00:13:36.111 11046.679 - 11103.916: 94.1676% ( 13) 00:13:36.111 11103.916 - 11161.153: 94.2753% ( 16) 00:13:36.111 11161.153 - 11218.390: 94.3629% ( 13) 00:13:36.111 11218.390 - 11275.626: 94.4572% ( 14) 00:13:36.111 11275.626 - 
11332.863: 94.5447% ( 13) 00:13:36.111 11332.863 - 11390.100: 94.6323% ( 13) 00:13:36.111 11390.100 - 11447.336: 94.7064% ( 11) 00:13:36.111 11447.336 - 11504.573: 94.7737% ( 10) 00:13:36.111 11504.573 - 11561.810: 94.8411% ( 10) 00:13:36.111 11561.810 - 11619.046: 94.9151% ( 11) 00:13:36.111 11619.046 - 11676.283: 94.9825% ( 10) 00:13:36.111 11676.283 - 11733.520: 95.0296% ( 7) 00:13:36.111 11733.520 - 11790.756: 95.0700% ( 6) 00:13:36.111 11790.756 - 11847.993: 95.0970% ( 4) 00:13:36.111 11847.993 - 11905.230: 95.1239% ( 4) 00:13:36.111 11905.230 - 11962.466: 95.1711% ( 7) 00:13:36.111 11962.466 - 12019.703: 95.2519% ( 12) 00:13:36.111 12019.703 - 12076.940: 95.3058% ( 8) 00:13:36.111 12076.940 - 12134.176: 95.3529% ( 7) 00:13:36.111 12134.176 - 12191.413: 95.4203% ( 10) 00:13:36.111 12191.413 - 12248.650: 95.4809% ( 9) 00:13:36.111 12248.650 - 12305.886: 95.5213% ( 6) 00:13:36.111 12305.886 - 12363.123: 95.5550% ( 5) 00:13:36.111 12363.123 - 12420.360: 95.5819% ( 4) 00:13:36.111 12420.360 - 12477.597: 95.6223% ( 6) 00:13:36.111 12477.597 - 12534.833: 95.6627% ( 6) 00:13:36.111 12534.833 - 12592.070: 95.7031% ( 6) 00:13:36.111 12592.070 - 12649.307: 95.7368% ( 5) 00:13:36.111 12649.307 - 12706.543: 95.7772% ( 6) 00:13:36.111 12706.543 - 12763.780: 95.8109% ( 5) 00:13:36.111 12763.780 - 12821.017: 95.8580% ( 7) 00:13:36.111 12821.017 - 12878.253: 95.8984% ( 6) 00:13:36.111 12878.253 - 12935.490: 95.9523% ( 8) 00:13:36.111 12935.490 - 12992.727: 95.9658% ( 2) 00:13:36.111 12992.727 - 13049.963: 95.9860% ( 3) 00:13:36.111 13049.963 - 13107.200: 96.0129% ( 4) 00:13:36.111 13107.200 - 13164.437: 96.0399% ( 4) 00:13:36.111 13164.437 - 13221.673: 96.0668% ( 4) 00:13:36.111 13221.673 - 13278.910: 96.1005% ( 5) 00:13:36.111 13278.910 - 13336.147: 96.1409% ( 6) 00:13:36.111 13336.147 - 13393.383: 96.1678% ( 4) 00:13:36.111 13393.383 - 13450.620: 96.2015% ( 5) 00:13:36.111 13450.620 - 13507.857: 96.2419% ( 6) 00:13:36.111 13507.857 - 13565.093: 96.3093% ( 10) 00:13:36.111 13565.093 - 13622.330: 96.3631% ( 8) 00:13:36.111 13622.330 - 13679.567: 96.4305% ( 10) 00:13:36.111 13679.567 - 13736.803: 96.4978% ( 10) 00:13:36.111 13736.803 - 13794.040: 96.5652% ( 10) 00:13:36.111 13794.040 - 13851.277: 96.6258% ( 9) 00:13:36.111 13851.277 - 13908.514: 96.6797% ( 8) 00:13:36.111 13908.514 - 13965.750: 96.7268% ( 7) 00:13:36.111 13965.750 - 14022.987: 96.7672% ( 6) 00:13:36.111 14022.987 - 14080.224: 96.8211% ( 8) 00:13:36.111 14080.224 - 14137.460: 96.8750% ( 8) 00:13:36.111 14137.460 - 14194.697: 96.9289% ( 8) 00:13:36.111 14194.697 - 14251.934: 96.9828% ( 8) 00:13:36.111 14251.934 - 14309.170: 97.0434% ( 9) 00:13:36.111 14309.170 - 14366.407: 97.0973% ( 8) 00:13:36.111 14366.407 - 14423.644: 97.1511% ( 8) 00:13:36.111 14423.644 - 14480.880: 97.2117% ( 9) 00:13:36.111 14480.880 - 14538.117: 97.2522% ( 6) 00:13:36.111 14538.117 - 14595.354: 97.3195% ( 10) 00:13:36.111 14595.354 - 14652.590: 97.3599% ( 6) 00:13:36.111 14652.590 - 14767.064: 97.4071% ( 7) 00:13:36.111 14767.064 - 14881.537: 97.4475% ( 6) 00:13:36.111 14881.537 - 14996.010: 97.4946% ( 7) 00:13:36.111 14996.010 - 15110.484: 97.5350% ( 6) 00:13:36.111 15110.484 - 15224.957: 97.5754% ( 6) 00:13:36.111 15224.957 - 15339.431: 97.6158% ( 6) 00:13:36.111 15339.431 - 15453.904: 97.6562% ( 6) 00:13:36.111 15453.904 - 15568.377: 97.7303% ( 11) 00:13:36.111 15568.377 - 15682.851: 97.8112% ( 12) 00:13:36.111 15682.851 - 15797.324: 97.8650% ( 8) 00:13:36.111 15797.324 - 15911.797: 97.9324% ( 10) 00:13:36.111 15911.797 - 16026.271: 97.9930% ( 9) 00:13:36.111 
16026.271 - 16140.744: 98.0603% ( 10) 00:13:36.111 16140.744 - 16255.217: 98.1210% ( 9) 00:13:36.111 16255.217 - 16369.691: 98.1816% ( 9) 00:13:36.111 16369.691 - 16484.164: 98.2287% ( 7) 00:13:36.111 16484.164 - 16598.638: 98.3230% ( 14) 00:13:36.111 16598.638 - 16713.111: 98.3432% ( 3) 00:13:36.111 16713.111 - 16827.584: 98.3769% ( 5) 00:13:36.111 16827.584 - 16942.058: 98.4240% ( 7) 00:13:36.111 16942.058 - 17056.531: 98.4577% ( 5) 00:13:36.111 17056.531 - 17171.004: 98.4846% ( 4) 00:13:36.111 17171.004 - 17285.478: 98.5183% ( 5) 00:13:36.111 17285.478 - 17399.951: 98.5587% ( 6) 00:13:36.111 17399.951 - 17514.424: 98.5924% ( 5) 00:13:36.111 17514.424 - 17628.898: 98.6328% ( 6) 00:13:36.111 17628.898 - 17743.371: 98.6665% ( 5) 00:13:36.111 17743.371 - 17857.845: 98.7002% ( 5) 00:13:36.111 17857.845 - 17972.318: 98.7204% ( 3) 00:13:36.111 17972.318 - 18086.791: 98.7608% ( 6) 00:13:36.111 18086.791 - 18201.265: 98.7877% ( 4) 00:13:36.111 18201.265 - 18315.738: 98.8214% ( 5) 00:13:36.111 18315.738 - 18430.211: 98.8618% ( 6) 00:13:36.111 18430.211 - 18544.685: 98.9089% ( 7) 00:13:36.111 18544.685 - 18659.158: 98.9426% ( 5) 00:13:36.111 18659.158 - 18773.631: 98.9696% ( 4) 00:13:36.111 18773.631 - 18888.105: 99.0032% ( 5) 00:13:36.111 18888.105 - 19002.578: 99.0436% ( 6) 00:13:36.111 19002.578 - 19117.052: 99.0773% ( 5) 00:13:36.111 19117.052 - 19231.525: 99.1110% ( 5) 00:13:36.111 19231.525 - 19345.998: 99.1379% ( 4) 00:13:36.111 30678.861 - 30907.808: 99.1783% ( 6) 00:13:36.111 30907.808 - 31136.755: 99.2255% ( 7) 00:13:36.111 31136.755 - 31365.701: 99.2794% ( 8) 00:13:36.111 31365.701 - 31594.648: 99.3332% ( 8) 00:13:36.111 31594.648 - 31823.595: 99.3737% ( 6) 00:13:36.111 31823.595 - 32052.541: 99.4208% ( 7) 00:13:36.111 32052.541 - 32281.488: 99.4679% ( 7) 00:13:36.111 32281.488 - 32510.435: 99.5286% ( 9) 00:13:36.111 32510.435 - 32739.382: 99.5690% ( 6) 00:13:36.111 37776.210 - 38005.156: 99.5959% ( 4) 00:13:36.111 38005.156 - 38234.103: 99.6430% ( 7) 00:13:36.111 38234.103 - 38463.050: 99.6902% ( 7) 00:13:36.111 38463.050 - 38691.997: 99.7441% ( 8) 00:13:36.111 38691.997 - 38920.943: 99.7912% ( 7) 00:13:36.111 38920.943 - 39149.890: 99.8518% ( 9) 00:13:36.111 39149.890 - 39378.837: 99.9057% ( 8) 00:13:36.111 39378.837 - 39607.783: 99.9529% ( 7) 00:13:36.111 39607.783 - 39836.730: 100.0000% ( 7) 00:13:36.111 00:13:36.111 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:36.111 ============================================================================== 00:13:36.111 Range in us Cumulative IO count 00:13:36.111 6524.982 - 6553.600: 0.0135% ( 2) 00:13:36.112 6553.600 - 6582.218: 0.0269% ( 2) 00:13:36.112 6582.218 - 6610.837: 0.0539% ( 4) 00:13:36.112 6610.837 - 6639.455: 0.0876% ( 5) 00:13:36.112 6639.455 - 6668.073: 0.1078% ( 3) 00:13:36.112 6668.073 - 6696.692: 0.1482% ( 6) 00:13:36.112 6696.692 - 6725.310: 0.2088% ( 9) 00:13:36.112 6725.310 - 6753.928: 0.2627% ( 8) 00:13:36.112 6753.928 - 6782.547: 0.3367% ( 11) 00:13:36.112 6782.547 - 6811.165: 0.4041% ( 10) 00:13:36.112 6811.165 - 6839.783: 0.4714% ( 10) 00:13:36.112 6839.783 - 6868.402: 0.5321% ( 9) 00:13:36.112 6868.402 - 6897.020: 0.6061% ( 11) 00:13:36.112 6897.020 - 6925.638: 0.7139% ( 16) 00:13:36.112 6925.638 - 6954.257: 0.8284% ( 17) 00:13:36.112 6954.257 - 6982.875: 0.9294% ( 15) 00:13:36.112 6982.875 - 7011.493: 1.0506% ( 18) 00:13:36.112 7011.493 - 7040.112: 1.2055% ( 23) 00:13:36.112 7040.112 - 7068.730: 1.3807% ( 26) 00:13:36.112 7068.730 - 7097.348: 1.5894% ( 31) 00:13:36.112 7097.348 - 7125.967: 
1.8386% ( 37) 00:13:36.112 7125.967 - 7154.585: 2.1080% ( 40) 00:13:36.112 7154.585 - 7183.203: 2.5054% ( 59) 00:13:36.112 7183.203 - 7211.822: 2.9701% ( 69) 00:13:36.112 7211.822 - 7240.440: 3.4887% ( 77) 00:13:36.112 7240.440 - 7269.059: 4.0881% ( 89) 00:13:36.112 7269.059 - 7297.677: 4.8155% ( 108) 00:13:36.112 7297.677 - 7326.295: 5.5765% ( 113) 00:13:36.112 7326.295 - 7383.532: 7.2535% ( 249) 00:13:36.112 7383.532 - 7440.769: 9.4289% ( 323) 00:13:36.112 7440.769 - 7498.005: 12.2912% ( 425) 00:13:36.112 7498.005 - 7555.242: 15.7664% ( 516) 00:13:36.112 7555.242 - 7612.479: 19.6659% ( 579) 00:13:36.112 7612.479 - 7669.715: 24.0167% ( 646) 00:13:36.112 7669.715 - 7726.952: 28.5022% ( 666) 00:13:36.112 7726.952 - 7784.189: 33.0415% ( 674) 00:13:36.112 7784.189 - 7841.425: 37.6886% ( 690) 00:13:36.112 7841.425 - 7898.662: 42.4434% ( 706) 00:13:36.112 7898.662 - 7955.899: 47.0299% ( 681) 00:13:36.112 7955.899 - 8013.135: 51.6366% ( 684) 00:13:36.112 8013.135 - 8070.372: 56.0816% ( 660) 00:13:36.112 8070.372 - 8127.609: 60.0552% ( 590) 00:13:36.112 8127.609 - 8184.845: 63.9345% ( 576) 00:13:36.112 8184.845 - 8242.082: 67.6320% ( 549) 00:13:36.112 8242.082 - 8299.319: 71.0870% ( 513) 00:13:36.112 8299.319 - 8356.555: 74.1783% ( 459) 00:13:36.112 8356.555 - 8413.792: 76.5625% ( 354) 00:13:36.112 8413.792 - 8471.029: 78.4213% ( 276) 00:13:36.112 8471.029 - 8528.266: 79.6942% ( 189) 00:13:36.112 8528.266 - 8585.502: 80.6506% ( 142) 00:13:36.112 8585.502 - 8642.739: 81.3241% ( 100) 00:13:36.112 8642.739 - 8699.976: 81.9370% ( 91) 00:13:36.112 8699.976 - 8757.212: 82.4353% ( 74) 00:13:36.112 8757.212 - 8814.449: 82.8529% ( 62) 00:13:36.112 8814.449 - 8871.686: 83.2301% ( 56) 00:13:36.112 8871.686 - 8928.922: 83.6880% ( 68) 00:13:36.112 8928.922 - 8986.159: 84.1393% ( 67) 00:13:36.112 8986.159 - 9043.396: 84.5838% ( 66) 00:13:36.112 9043.396 - 9100.632: 84.9475% ( 54) 00:13:36.112 9100.632 - 9157.869: 85.2640% ( 47) 00:13:36.112 9157.869 - 9215.106: 85.6075% ( 51) 00:13:36.112 9215.106 - 9272.342: 85.9779% ( 55) 00:13:36.112 9272.342 - 9329.579: 86.3147% ( 50) 00:13:36.112 9329.579 - 9386.816: 86.6783% ( 54) 00:13:36.112 9386.816 - 9444.052: 87.0286% ( 52) 00:13:36.112 9444.052 - 9501.289: 87.3518% ( 48) 00:13:36.112 9501.289 - 9558.526: 87.7088% ( 53) 00:13:36.112 9558.526 - 9615.762: 88.0523% ( 51) 00:13:36.112 9615.762 - 9672.999: 88.3823% ( 49) 00:13:36.112 9672.999 - 9730.236: 88.7055% ( 48) 00:13:36.112 9730.236 - 9787.472: 89.0423% ( 50) 00:13:36.112 9787.472 - 9844.709: 89.3386% ( 44) 00:13:36.112 9844.709 - 9901.946: 89.6484% ( 46) 00:13:36.112 9901.946 - 9959.183: 89.9380% ( 43) 00:13:36.112 9959.183 - 10016.419: 90.2007% ( 39) 00:13:36.112 10016.419 - 10073.656: 90.4903% ( 43) 00:13:36.112 10073.656 - 10130.893: 90.7732% ( 42) 00:13:36.112 10130.893 - 10188.129: 90.9954% ( 33) 00:13:36.112 10188.129 - 10245.366: 91.2985% ( 45) 00:13:36.112 10245.366 - 10302.603: 91.5814% ( 42) 00:13:36.112 10302.603 - 10359.839: 91.8036% ( 33) 00:13:36.112 10359.839 - 10417.076: 92.0730% ( 40) 00:13:36.112 10417.076 - 10474.313: 92.3693% ( 44) 00:13:36.112 10474.313 - 10531.549: 92.5983% ( 34) 00:13:36.112 10531.549 - 10588.786: 92.8408% ( 36) 00:13:36.112 10588.786 - 10646.023: 93.0563% ( 32) 00:13:36.112 10646.023 - 10703.259: 93.3190% ( 39) 00:13:36.112 10703.259 - 10760.496: 93.5277% ( 31) 00:13:36.112 10760.496 - 10817.733: 93.7163% ( 28) 00:13:36.112 10817.733 - 10874.969: 93.8645% ( 22) 00:13:36.112 10874.969 - 10932.206: 94.0127% ( 22) 00:13:36.112 10932.206 - 10989.443: 94.1676% ( 23) 
00:13:36.112 10989.443 - 11046.679: 94.3090% ( 21) 00:13:36.112 11046.679 - 11103.916: 94.4235% ( 17) 00:13:36.112 11103.916 - 11161.153: 94.5380% ( 17) 00:13:36.112 11161.153 - 11218.390: 94.6121% ( 11) 00:13:36.112 11218.390 - 11275.626: 94.6996% ( 13) 00:13:36.112 11275.626 - 11332.863: 94.7468% ( 7) 00:13:36.112 11332.863 - 11390.100: 94.7939% ( 7) 00:13:36.112 11390.100 - 11447.336: 94.8411% ( 7) 00:13:36.112 11447.336 - 11504.573: 94.8815% ( 6) 00:13:36.112 11504.573 - 11561.810: 94.9421% ( 9) 00:13:36.112 11561.810 - 11619.046: 94.9825% ( 6) 00:13:36.112 11619.046 - 11676.283: 95.0296% ( 7) 00:13:36.112 11676.283 - 11733.520: 95.0700% ( 6) 00:13:36.112 11733.520 - 11790.756: 95.0970% ( 4) 00:13:36.112 11790.756 - 11847.993: 95.1239% ( 4) 00:13:36.112 11847.993 - 11905.230: 95.1509% ( 4) 00:13:36.112 11905.230 - 11962.466: 95.1845% ( 5) 00:13:36.112 11962.466 - 12019.703: 95.2047% ( 3) 00:13:36.112 12019.703 - 12076.940: 95.2182% ( 2) 00:13:36.112 12076.940 - 12134.176: 95.2317% ( 2) 00:13:36.112 12134.176 - 12191.413: 95.2384% ( 1) 00:13:36.112 12191.413 - 12248.650: 95.2586% ( 3) 00:13:36.112 12248.650 - 12305.886: 95.2856% ( 4) 00:13:36.112 12305.886 - 12363.123: 95.3058% ( 3) 00:13:36.112 12363.123 - 12420.360: 95.3327% ( 4) 00:13:36.112 12420.360 - 12477.597: 95.3731% ( 6) 00:13:36.112 12477.597 - 12534.833: 95.4068% ( 5) 00:13:36.112 12534.833 - 12592.070: 95.4337% ( 4) 00:13:36.112 12592.070 - 12649.307: 95.4674% ( 5) 00:13:36.112 12649.307 - 12706.543: 95.5011% ( 5) 00:13:36.112 12706.543 - 12763.780: 95.5348% ( 5) 00:13:36.112 12763.780 - 12821.017: 95.5684% ( 5) 00:13:36.112 12821.017 - 12878.253: 95.6223% ( 8) 00:13:36.112 12878.253 - 12935.490: 95.6695% ( 7) 00:13:36.112 12935.490 - 12992.727: 95.7099% ( 6) 00:13:36.112 12992.727 - 13049.963: 95.7503% ( 6) 00:13:36.112 13049.963 - 13107.200: 95.7974% ( 7) 00:13:36.112 13107.200 - 13164.437: 95.8378% ( 6) 00:13:36.112 13164.437 - 13221.673: 95.8917% ( 8) 00:13:36.112 13221.673 - 13278.910: 95.9523% ( 9) 00:13:36.112 13278.910 - 13336.147: 96.0062% ( 8) 00:13:36.112 13336.147 - 13393.383: 96.0466% ( 6) 00:13:36.112 13393.383 - 13450.620: 96.0803% ( 5) 00:13:36.112 13450.620 - 13507.857: 96.1274% ( 7) 00:13:36.112 13507.857 - 13565.093: 96.1880% ( 9) 00:13:36.112 13565.093 - 13622.330: 96.2554% ( 10) 00:13:36.112 13622.330 - 13679.567: 96.3295% ( 11) 00:13:36.112 13679.567 - 13736.803: 96.3968% ( 10) 00:13:36.112 13736.803 - 13794.040: 96.4776% ( 12) 00:13:36.112 13794.040 - 13851.277: 96.5383% ( 9) 00:13:36.112 13851.277 - 13908.514: 96.6123% ( 11) 00:13:36.112 13908.514 - 13965.750: 96.6730% ( 9) 00:13:36.112 13965.750 - 14022.987: 96.7470% ( 11) 00:13:36.112 14022.987 - 14080.224: 96.8279% ( 12) 00:13:36.112 14080.224 - 14137.460: 96.9019% ( 11) 00:13:36.112 14137.460 - 14194.697: 96.9693% ( 10) 00:13:36.112 14194.697 - 14251.934: 97.0299% ( 9) 00:13:36.112 14251.934 - 14309.170: 97.1107% ( 12) 00:13:36.112 14309.170 - 14366.407: 97.1781% ( 10) 00:13:36.112 14366.407 - 14423.644: 97.2387% ( 9) 00:13:36.112 14423.644 - 14480.880: 97.2993% ( 9) 00:13:36.112 14480.880 - 14538.117: 97.3262% ( 4) 00:13:36.112 14538.117 - 14595.354: 97.3666% ( 6) 00:13:36.112 14595.354 - 14652.590: 97.4071% ( 6) 00:13:36.112 14652.590 - 14767.064: 97.4811% ( 11) 00:13:36.112 14767.064 - 14881.537: 97.5418% ( 9) 00:13:36.112 14881.537 - 14996.010: 97.6091% ( 10) 00:13:36.112 14996.010 - 15110.484: 97.6765% ( 10) 00:13:36.112 15110.484 - 15224.957: 97.7101% ( 5) 00:13:36.112 15224.957 - 15339.431: 97.7303% ( 3) 00:13:36.112 15339.431 - 15453.904: 
97.7505% ( 3) 00:13:36.112 15453.904 - 15568.377: 97.7707% ( 3) 00:13:36.112 15568.377 - 15682.851: 97.7909% ( 3) 00:13:36.112 15682.851 - 15797.324: 97.8112% ( 3) 00:13:36.112 15797.324 - 15911.797: 97.8718% ( 9) 00:13:36.112 15911.797 - 16026.271: 97.9256% ( 8) 00:13:36.112 16026.271 - 16140.744: 97.9728% ( 7) 00:13:36.112 16140.744 - 16255.217: 98.0132% ( 6) 00:13:36.112 16255.217 - 16369.691: 98.0603% ( 7) 00:13:36.112 16369.691 - 16484.164: 98.1008% ( 6) 00:13:36.112 16484.164 - 16598.638: 98.1412% ( 6) 00:13:36.112 16598.638 - 16713.111: 98.2085% ( 10) 00:13:36.112 16713.111 - 16827.584: 98.2893% ( 12) 00:13:36.112 16827.584 - 16942.058: 98.3769% ( 13) 00:13:36.112 16942.058 - 17056.531: 98.4375% ( 9) 00:13:36.112 17056.531 - 17171.004: 98.4846% ( 7) 00:13:36.112 17171.004 - 17285.478: 98.5453% ( 9) 00:13:36.112 17285.478 - 17399.951: 98.6059% ( 9) 00:13:36.112 17399.951 - 17514.424: 98.6598% ( 8) 00:13:36.112 17514.424 - 17628.898: 98.7204% ( 9) 00:13:36.112 17628.898 - 17743.371: 98.7742% ( 8) 00:13:36.112 17743.371 - 17857.845: 98.8349% ( 9) 00:13:36.112 17857.845 - 17972.318: 98.8887% ( 8) 00:13:36.112 17972.318 - 18086.791: 98.9224% ( 5) 00:13:36.112 18086.791 - 18201.265: 98.9426% ( 3) 00:13:36.112 18201.265 - 18315.738: 98.9628% ( 3) 00:13:36.112 18315.738 - 18430.211: 98.9898% ( 4) 00:13:36.112 18430.211 - 18544.685: 99.0100% ( 3) 00:13:36.112 18544.685 - 18659.158: 99.0234% ( 2) 00:13:36.112 18659.158 - 18773.631: 99.0436% ( 3) 00:13:36.112 18773.631 - 18888.105: 99.0638% ( 3) 00:13:36.112 18888.105 - 19002.578: 99.0841% ( 3) 00:13:36.112 19002.578 - 19117.052: 99.1043% ( 3) 00:13:36.112 19117.052 - 19231.525: 99.1245% ( 3) 00:13:36.112 19231.525 - 19345.998: 99.1379% ( 2) 00:13:36.112 28961.761 - 29076.234: 99.1581% ( 3) 00:13:36.112 29076.234 - 29190.707: 99.1851% ( 4) 00:13:36.112 29190.707 - 29305.181: 99.2053% ( 3) 00:13:36.112 29305.181 - 29534.128: 99.2524% ( 7) 00:13:36.112 29534.128 - 29763.074: 99.3063% ( 8) 00:13:36.113 29763.074 - 29992.021: 99.3534% ( 7) 00:13:36.113 29992.021 - 30220.968: 99.4006% ( 7) 00:13:36.113 30220.968 - 30449.914: 99.4477% ( 7) 00:13:36.113 30449.914 - 30678.861: 99.5016% ( 8) 00:13:36.113 30678.861 - 30907.808: 99.5488% ( 7) 00:13:36.113 30907.808 - 31136.755: 99.5690% ( 3) 00:13:36.113 36173.583 - 36402.529: 99.5824% ( 2) 00:13:36.113 36402.529 - 36631.476: 99.6228% ( 6) 00:13:36.113 36631.476 - 36860.423: 99.6700% ( 7) 00:13:36.113 36860.423 - 37089.369: 99.7171% ( 7) 00:13:36.113 37089.369 - 37318.316: 99.7710% ( 8) 00:13:36.113 37318.316 - 37547.263: 99.8182% ( 7) 00:13:36.113 37547.263 - 37776.210: 99.8586% ( 6) 00:13:36.113 37776.210 - 38005.156: 99.9057% ( 7) 00:13:36.113 38005.156 - 38234.103: 99.9529% ( 7) 00:13:36.113 38234.103 - 38463.050: 100.0000% ( 7) 00:13:36.113 00:13:36.113 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:36.113 ============================================================================== 00:13:36.113 Range in us Cumulative IO count 00:13:36.113 6524.982 - 6553.600: 0.0067% ( 1) 00:13:36.113 6553.600 - 6582.218: 0.0404% ( 5) 00:13:36.113 6582.218 - 6610.837: 0.0673% ( 4) 00:13:36.113 6610.837 - 6639.455: 0.0876% ( 3) 00:13:36.113 6639.455 - 6668.073: 0.1280% ( 6) 00:13:36.113 6668.073 - 6696.692: 0.1886% ( 9) 00:13:36.113 6696.692 - 6725.310: 0.2357% ( 7) 00:13:36.113 6725.310 - 6753.928: 0.2963% ( 9) 00:13:36.113 6753.928 - 6782.547: 0.3502% ( 8) 00:13:36.113 6782.547 - 6811.165: 0.4176% ( 10) 00:13:36.113 6811.165 - 6839.783: 0.4916% ( 11) 00:13:36.113 6839.783 - 6868.402: 0.5523% 
( 9)
00:13:36.113 [bucket detail: 6868.402us - 36173.583us, cumulative 0.6398% -> 100.0000%]
00:13:36.114
00:13:36.114 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:13:36.114 ==============================================================================
00:13:36.114 Range in us Cumulative IO count
00:13:36.114 [bucket detail: 6553.600us - 33884.115us, cumulative 0.0202% -> 100.0000%]
00:13:36.115 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:13:36.115 ==============================================================================
00:13:36.115 Range in us Cumulative IO count
00:13:36.115 [bucket detail: 6553.600us - 25756.507us, cumulative 0.0201% -> 100.0000%]
00:13:36.116
00:13:36.116 18:17:29 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:13:37.497 Initializing NVMe Controllers
00:13:37.497 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:13:37.497 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:13:37.497 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:13:37.497 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:13:37.497 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:13:37.497 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:13:37.497 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:13:37.497 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:13:37.497 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:13:37.497 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:13:37.497 Initialization complete. Launching workers.
00:13:37.497 ========================================================
00:13:37.497                                                                        Latency(us)
00:13:37.497 Device Information                     :       IOPS      MiB/s    Average        min        max
00:13:37.497 PCIE (0000:00:10.0) NSID 1 from core 0:    9002.55     105.50   14284.08    9122.08   47262.12
00:13:37.497 PCIE (0000:00:11.0) NSID 1 from core 0:    9002.55     105.50   14269.12    9117.71   45711.06
00:13:37.497 PCIE (0000:00:13.0) NSID 1 from core 0:    9002.55     105.50   14254.09    9392.16   44823.78
00:13:37.497 PCIE (0000:00:12.0) NSID 1 from core 0:    9002.55     105.50   14240.67    9272.09   43341.66
00:13:37.497 PCIE (0000:00:12.0) NSID 2 from core 0:    9002.55     105.50   14225.33    9319.35   42359.01
00:13:37.497 PCIE (0000:00:12.0) NSID 3 from core 0:    9002.55     105.50   14206.91    9345.50   40731.80
00:13:37.497 ========================================================
00:13:37.497 Total                                  :   54015.28     632.99   14246.70    9117.71   47262.12
00:13:37.497
00:13:37.497 Summary latency data from core 0, all values in us (one column per namespace, in the order of the device table above):
00:13:37.497 =================================================================================
00:13:37.497 Percentile      10.0/ns1     11.0/ns1     13.0/ns1     12.0/ns1     12.0/ns2     12.0/ns3
00:13:37.497 1.00000%        9672.999     9558.526     9844.709     9730.236     9844.709     9844.709
00:13:37.497 10.00000%      10817.733    10760.496    10703.259    10703.259    10646.023    10646.023
00:13:37.497 25.00000%      12420.360    12363.123    12477.597    12420.360    12420.360    12477.597
00:13:37.497 50.00000%      13565.093    13507.857    13565.093    13565.093    13622.330    13565.093
00:13:37.497 75.00000%      15797.324    15911.797    15911.797    15911.797    15797.324    15797.324
00:13:37.497 90.00000%      17743.371    17743.371    17285.478    17514.424    17399.951    17628.898
00:13:37.497 95.00000%      18544.685    18659.158    18315.738    18086.791    18315.738    18544.685
00:13:37.497 98.00000%      20834.152    19231.525    19345.998    19803.892    19574.945    19803.892
00:13:37.497 99.00000%      31823.595    32968.328    32281.488    31365.701    30220.968    29763.074
00:13:37.497 99.50000%      45102.505    44186.718    43270.931    41668.304    40752.517    38463.050
00:13:37.497 99.90000%      46934.079    45560.398    44644.611    43041.984    42126.197    40523.570
00:13:37.497 99.99000%      47391.972    45789.345    44873.558    43499.878    42584.091    40752.517
00:13:37.497 99.99900%      47391.972    45789.345    44873.558    43499.878    42584.091    40752.517
00:13:37.497 99.99990%      47391.972    45789.345    44873.558    43499.878    42584.091    40752.517
00:13:37.497 99.99999%      47391.972    45789.345    44873.558    43499.878    42584.091    40752.517
00:13:37.497
00:13:37.497 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:13:37.497 ==============================================================================
00:13:37.497 Range in us Cumulative IO count
00:13:37.497 [bucket detail: 9100.632us - 47391.972us, cumulative 0.0332% -> 100.0000%]
00:13:37.499
00:13:37.499 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:13:37.499 ==============================================================================
00:13:37.499 Range in us Cumulative IO count
00:13:37.499 [bucket detail: 9100.632us - 45789.345us, cumulative 0.0222% -> 100.0000%]
00:13:37.500
00:13:37.500 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:13:37.500 ==============================================================================
00:13:37.500 Range in us Cumulative IO count
00:13:37.500 [bucket detail: 9386.816us - 44873.558us, cumulative 0.0776% -> 100.0000%]
00:13:37.500
00:13:37.500 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:13:37.500 ==============================================================================
00:13:37.500 Range in us Cumulative IO count
00:13:37.500 [bucket detail: 9215.106us - 18544.685us, cumulative 0.0111% -> 96.3542%]
00:13:37.501 18544.685 - 18659.158:
96.5869% ( 21) 00:13:37.501 18659.158 - 18773.631: 96.8418% ( 23) 00:13:37.501 18773.631 - 18888.105: 96.9637% ( 11) 00:13:37.501 18888.105 - 19002.578: 97.1077% ( 13) 00:13:37.501 19002.578 - 19117.052: 97.2850% ( 16) 00:13:37.501 19117.052 - 19231.525: 97.3515% ( 6) 00:13:37.501 19231.525 - 19345.998: 97.4402% ( 8) 00:13:37.501 19345.998 - 19460.472: 97.6175% ( 16) 00:13:37.501 19460.472 - 19574.945: 97.7615% ( 13) 00:13:37.501 19574.945 - 19689.418: 97.9056% ( 13) 00:13:37.501 19689.418 - 19803.892: 98.0053% ( 9) 00:13:37.501 19803.892 - 19918.365: 98.2380% ( 21) 00:13:37.501 19918.365 - 20032.838: 98.4818% ( 22) 00:13:37.501 20032.838 - 20147.312: 98.5816% ( 9) 00:13:37.501 29763.074 - 29992.021: 98.6370% ( 5) 00:13:37.501 29992.021 - 30220.968: 98.7478% ( 10) 00:13:37.501 30220.968 - 30449.914: 98.8254% ( 7) 00:13:37.501 30449.914 - 30678.861: 98.8697% ( 4) 00:13:37.501 30678.861 - 30907.808: 98.9140% ( 4) 00:13:37.501 30907.808 - 31136.755: 98.9583% ( 4) 00:13:37.501 31136.755 - 31365.701: 99.0137% ( 5) 00:13:37.501 31365.701 - 31594.648: 99.0691% ( 5) 00:13:37.501 31594.648 - 31823.595: 99.1246% ( 5) 00:13:37.501 31823.595 - 32052.541: 99.1800% ( 5) 00:13:37.501 32052.541 - 32281.488: 99.2465% ( 6) 00:13:37.501 32281.488 - 32510.435: 99.2908% ( 4) 00:13:37.501 40752.517 - 40981.464: 99.3019% ( 1) 00:13:37.501 40981.464 - 41210.410: 99.3684% ( 6) 00:13:37.501 41210.410 - 41439.357: 99.4348% ( 6) 00:13:37.501 41439.357 - 41668.304: 99.5013% ( 6) 00:13:37.501 41668.304 - 41897.251: 99.5678% ( 6) 00:13:37.501 41897.251 - 42126.197: 99.6343% ( 6) 00:13:37.501 42126.197 - 42355.144: 99.7119% ( 7) 00:13:37.501 42355.144 - 42584.091: 99.7784% ( 6) 00:13:37.501 42584.091 - 42813.038: 99.8449% ( 6) 00:13:37.501 42813.038 - 43041.984: 99.9113% ( 6) 00:13:37.501 43041.984 - 43270.931: 99.9778% ( 6) 00:13:37.501 43270.931 - 43499.878: 100.0000% ( 2) 00:13:37.501 00:13:37.501 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:37.501 ============================================================================== 00:13:37.501 Range in us Cumulative IO count 00:13:37.501 9272.342 - 9329.579: 0.0111% ( 1) 00:13:37.501 9329.579 - 9386.816: 0.0665% ( 5) 00:13:37.501 9386.816 - 9444.052: 0.1441% ( 7) 00:13:37.501 9444.052 - 9501.289: 0.2216% ( 7) 00:13:37.501 9501.289 - 9558.526: 0.3546% ( 12) 00:13:37.501 9558.526 - 9615.762: 0.5098% ( 14) 00:13:37.501 9615.762 - 9672.999: 0.6095% ( 9) 00:13:37.501 9672.999 - 9730.236: 0.7535% ( 13) 00:13:37.501 9730.236 - 9787.472: 0.8976% ( 13) 00:13:37.501 9787.472 - 9844.709: 1.0084% ( 10) 00:13:37.501 9844.709 - 9901.946: 1.2744% ( 24) 00:13:37.501 9901.946 - 9959.183: 1.5957% ( 29) 00:13:37.501 9959.183 - 10016.419: 1.9060% ( 28) 00:13:37.501 10016.419 - 10073.656: 2.4823% ( 52) 00:13:37.501 10073.656 - 10130.893: 3.3577% ( 79) 00:13:37.501 10130.893 - 10188.129: 3.9229% ( 51) 00:13:37.501 10188.129 - 10245.366: 4.8537% ( 84) 00:13:37.501 10245.366 - 10302.603: 5.9619% ( 100) 00:13:37.501 10302.603 - 10359.839: 6.7265% ( 69) 00:13:37.501 10359.839 - 10417.076: 7.5022% ( 70) 00:13:37.501 10417.076 - 10474.313: 8.4331% ( 84) 00:13:37.501 10474.313 - 10531.549: 9.0647% ( 57) 00:13:37.501 10531.549 - 10588.786: 9.6188% ( 50) 00:13:37.501 10588.786 - 10646.023: 10.1840% ( 51) 00:13:37.502 10646.023 - 10703.259: 10.6937% ( 46) 00:13:37.502 10703.259 - 10760.496: 10.9818% ( 26) 00:13:37.502 10760.496 - 10817.733: 11.3918% ( 37) 00:13:37.502 10817.733 - 10874.969: 11.8129% ( 38) 00:13:37.502 10874.969 - 10932.206: 12.1232% ( 28) 00:13:37.502 10932.206 - 
10989.443: 12.5111% ( 35) 00:13:37.502 10989.443 - 11046.679: 12.7992% ( 26) 00:13:37.502 11046.679 - 11103.916: 13.0652% ( 24) 00:13:37.502 11103.916 - 11161.153: 13.4530% ( 35) 00:13:37.502 11161.153 - 11218.390: 13.6746% ( 20) 00:13:37.502 11218.390 - 11275.626: 13.8409% ( 15) 00:13:37.502 11275.626 - 11332.863: 13.9738% ( 12) 00:13:37.502 11332.863 - 11390.100: 14.1401% ( 15) 00:13:37.502 11390.100 - 11447.336: 14.2730% ( 12) 00:13:37.502 11447.336 - 11504.573: 14.4725% ( 18) 00:13:37.502 11504.573 - 11561.810: 14.7717% ( 27) 00:13:37.502 11561.810 - 11619.046: 15.1374% ( 33) 00:13:37.502 11619.046 - 11676.283: 15.4809% ( 31) 00:13:37.502 11676.283 - 11733.520: 15.7580% ( 25) 00:13:37.502 11733.520 - 11790.756: 15.9242% ( 15) 00:13:37.502 11790.756 - 11847.993: 16.1348% ( 19) 00:13:37.502 11847.993 - 11905.230: 16.3675% ( 21) 00:13:37.502 11905.230 - 11962.466: 16.7553% ( 35) 00:13:37.502 11962.466 - 12019.703: 17.4978% ( 67) 00:13:37.502 12019.703 - 12076.940: 18.3621% ( 78) 00:13:37.502 12076.940 - 12134.176: 19.4371% ( 97) 00:13:37.502 12134.176 - 12191.413: 20.5230% ( 98) 00:13:37.502 12191.413 - 12248.650: 21.6977% ( 106) 00:13:37.502 12248.650 - 12305.886: 22.8723% ( 106) 00:13:37.502 12305.886 - 12363.123: 23.8808% ( 91) 00:13:37.502 12363.123 - 12420.360: 25.0776% ( 108) 00:13:37.502 12420.360 - 12477.597: 26.2633% ( 107) 00:13:37.502 12477.597 - 12534.833: 27.3825% ( 101) 00:13:37.502 12534.833 - 12592.070: 28.6791% ( 117) 00:13:37.502 12592.070 - 12649.307: 30.0421% ( 123) 00:13:37.502 12649.307 - 12706.543: 31.6046% ( 141) 00:13:37.502 12706.543 - 12763.780: 33.0120% ( 127) 00:13:37.502 12763.780 - 12821.017: 34.6742% ( 150) 00:13:37.502 12821.017 - 12878.253: 36.1813% ( 136) 00:13:37.502 12878.253 - 12935.490: 38.0098% ( 165) 00:13:37.502 12935.490 - 12992.727: 39.5612% ( 140) 00:13:37.502 12992.727 - 13049.963: 40.9131% ( 122) 00:13:37.502 13049.963 - 13107.200: 42.0324% ( 101) 00:13:37.502 13107.200 - 13164.437: 43.1405% ( 100) 00:13:37.502 13164.437 - 13221.673: 44.0935% ( 86) 00:13:37.502 13221.673 - 13278.910: 45.0687% ( 88) 00:13:37.502 13278.910 - 13336.147: 46.1547% ( 98) 00:13:37.502 13336.147 - 13393.383: 47.1742% ( 92) 00:13:37.502 13393.383 - 13450.620: 48.2491% ( 97) 00:13:37.502 13450.620 - 13507.857: 49.0359% ( 71) 00:13:37.502 13507.857 - 13565.093: 49.9113% ( 79) 00:13:37.502 13565.093 - 13622.330: 50.9198% ( 91) 00:13:37.502 13622.330 - 13679.567: 51.6733% ( 68) 00:13:37.502 13679.567 - 13736.803: 52.4490% ( 70) 00:13:37.502 13736.803 - 13794.040: 53.1250% ( 61) 00:13:37.502 13794.040 - 13851.277: 53.9118% ( 71) 00:13:37.502 13851.277 - 13908.514: 54.9424% ( 93) 00:13:37.502 13908.514 - 13965.750: 55.8954% ( 86) 00:13:37.502 13965.750 - 14022.987: 56.7598% ( 78) 00:13:37.502 14022.987 - 14080.224: 57.5798% ( 74) 00:13:37.502 14080.224 - 14137.460: 58.5439% ( 87) 00:13:37.502 14137.460 - 14194.697: 59.2420% ( 63) 00:13:37.502 14194.697 - 14251.934: 59.9069% ( 60) 00:13:37.502 14251.934 - 14309.170: 60.7824% ( 79) 00:13:37.502 14309.170 - 14366.407: 61.4916% ( 64) 00:13:37.502 14366.407 - 14423.644: 62.1897% ( 63) 00:13:37.502 14423.644 - 14480.880: 62.7438% ( 50) 00:13:37.502 14480.880 - 14538.117: 63.2868% ( 49) 00:13:37.502 14538.117 - 14595.354: 63.7965% ( 46) 00:13:37.502 14595.354 - 14652.590: 64.2287% ( 39) 00:13:37.502 14652.590 - 14767.064: 65.1374% ( 82) 00:13:37.502 14767.064 - 14881.537: 66.1902% ( 95) 00:13:37.502 14881.537 - 14996.010: 67.1764% ( 89) 00:13:37.502 14996.010 - 15110.484: 68.0629% ( 80) 00:13:37.502 15110.484 - 15224.957: 69.1046% 
( 94) 00:13:37.502 15224.957 - 15339.431: 70.2682% ( 105) 00:13:37.502 15339.431 - 15453.904: 71.6977% ( 129) 00:13:37.502 15453.904 - 15568.377: 73.0829% ( 125) 00:13:37.502 15568.377 - 15682.851: 74.4902% ( 127) 00:13:37.502 15682.851 - 15797.324: 75.6095% ( 101) 00:13:37.502 15797.324 - 15911.797: 76.7398% ( 102) 00:13:37.502 15911.797 - 16026.271: 77.2939% ( 50) 00:13:37.502 16026.271 - 16140.744: 77.9145% ( 56) 00:13:37.502 16140.744 - 16255.217: 78.5129% ( 54) 00:13:37.502 16255.217 - 16369.691: 79.3994% ( 80) 00:13:37.502 16369.691 - 16484.164: 80.3967% ( 90) 00:13:37.502 16484.164 - 16598.638: 81.9038% ( 136) 00:13:37.502 16598.638 - 16713.111: 83.5106% ( 145) 00:13:37.502 16713.111 - 16827.584: 84.6410% ( 102) 00:13:37.502 16827.584 - 16942.058: 85.9818% ( 121) 00:13:37.502 16942.058 - 17056.531: 87.5000% ( 137) 00:13:37.502 17056.531 - 17171.004: 88.6303% ( 102) 00:13:37.502 17171.004 - 17285.478: 89.5944% ( 87) 00:13:37.502 17285.478 - 17399.951: 90.4145% ( 74) 00:13:37.502 17399.951 - 17514.424: 91.4118% ( 90) 00:13:37.502 17514.424 - 17628.898: 92.1543% ( 67) 00:13:37.502 17628.898 - 17743.371: 92.7416% ( 53) 00:13:37.502 17743.371 - 17857.845: 93.2181% ( 43) 00:13:37.502 17857.845 - 17972.318: 93.8387% ( 56) 00:13:37.502 17972.318 - 18086.791: 94.2819% ( 40) 00:13:37.502 18086.791 - 18201.265: 94.9690% ( 62) 00:13:37.502 18201.265 - 18315.738: 95.3790% ( 37) 00:13:37.502 18315.738 - 18430.211: 95.8444% ( 42) 00:13:37.502 18430.211 - 18544.685: 96.1990% ( 32) 00:13:37.502 18544.685 - 18659.158: 96.3985% ( 18) 00:13:37.502 18659.158 - 18773.631: 96.5758% ( 16) 00:13:37.502 18773.631 - 18888.105: 96.7642% ( 17) 00:13:37.502 18888.105 - 19002.578: 97.0966% ( 30) 00:13:37.502 19002.578 - 19117.052: 97.4402% ( 31) 00:13:37.502 19117.052 - 19231.525: 97.7283% ( 26) 00:13:37.502 19231.525 - 19345.998: 97.8391% ( 10) 00:13:37.502 19345.998 - 19460.472: 97.9721% ( 12) 00:13:37.502 19460.472 - 19574.945: 98.0940% ( 11) 00:13:37.502 19574.945 - 19689.418: 98.1715% ( 7) 00:13:37.502 19689.418 - 19803.892: 98.2380% ( 6) 00:13:37.502 19803.892 - 19918.365: 98.3378% ( 9) 00:13:37.502 19918.365 - 20032.838: 98.3821% ( 4) 00:13:37.502 20032.838 - 20147.312: 98.4264% ( 4) 00:13:37.502 20147.312 - 20261.785: 98.4707% ( 4) 00:13:37.502 20261.785 - 20376.259: 98.5151% ( 4) 00:13:37.502 20376.259 - 20490.732: 98.5483% ( 3) 00:13:37.502 20490.732 - 20605.205: 98.5816% ( 3) 00:13:37.502 28732.814 - 28847.287: 98.5926% ( 1) 00:13:37.502 28961.761 - 29076.234: 98.6037% ( 1) 00:13:37.502 29190.707 - 29305.181: 98.6259% ( 2) 00:13:37.502 29305.181 - 29534.128: 98.7035% ( 7) 00:13:37.502 29534.128 - 29763.074: 98.8032% ( 9) 00:13:37.502 29763.074 - 29992.021: 98.8697% ( 6) 00:13:37.502 29992.021 - 30220.968: 99.0027% ( 12) 00:13:37.502 30220.968 - 30449.914: 99.0581% ( 5) 00:13:37.502 30449.914 - 30678.861: 99.1024% ( 4) 00:13:37.502 30678.861 - 30907.808: 99.1467% ( 4) 00:13:37.502 30907.808 - 31136.755: 99.2021% ( 5) 00:13:37.502 31136.755 - 31365.701: 99.2465% ( 4) 00:13:37.502 31365.701 - 31594.648: 99.2908% ( 4) 00:13:37.502 38691.997 - 38920.943: 99.3019% ( 1) 00:13:37.502 38920.943 - 39149.890: 99.3240% ( 2) 00:13:37.502 39149.890 - 39378.837: 99.3351% ( 1) 00:13:37.502 39378.837 - 39607.783: 99.3462% ( 1) 00:13:37.502 40065.677 - 40294.624: 99.4127% ( 6) 00:13:37.502 40294.624 - 40523.570: 99.4681% ( 5) 00:13:37.502 40523.570 - 40752.517: 99.5235% ( 5) 00:13:37.502 40752.517 - 40981.464: 99.5900% ( 6) 00:13:37.502 40981.464 - 41210.410: 99.6454% ( 5) 00:13:37.502 41210.410 - 41439.357: 99.7119% ( 
6) 00:13:37.502 41439.357 - 41668.304: 99.7895% ( 7) 00:13:37.502 41668.304 - 41897.251: 99.8559% ( 6) 00:13:37.502 41897.251 - 42126.197: 99.9224% ( 6) 00:13:37.502 42126.197 - 42355.144: 99.9889% ( 6) 00:13:37.502 42355.144 - 42584.091: 100.0000% ( 1) 00:13:37.502 00:13:37.502 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:37.502 ============================================================================== 00:13:37.502 Range in us Cumulative IO count 00:13:37.502 9329.579 - 9386.816: 0.0111% ( 1) 00:13:37.502 9386.816 - 9444.052: 0.0222% ( 1) 00:13:37.502 9444.052 - 9501.289: 0.0887% ( 6) 00:13:37.502 9501.289 - 9558.526: 0.1995% ( 10) 00:13:37.502 9558.526 - 9615.762: 0.2660% ( 6) 00:13:37.502 9615.762 - 9672.999: 0.5652% ( 27) 00:13:37.502 9672.999 - 9730.236: 0.7425% ( 16) 00:13:37.502 9730.236 - 9787.472: 0.9419% ( 18) 00:13:37.502 9787.472 - 9844.709: 1.2855% ( 31) 00:13:37.502 9844.709 - 9901.946: 1.8617% ( 52) 00:13:37.502 9901.946 - 9959.183: 2.5155% ( 59) 00:13:37.503 9959.183 - 10016.419: 3.1361% ( 56) 00:13:37.503 10016.419 - 10073.656: 3.6791% ( 49) 00:13:37.503 10073.656 - 10130.893: 4.2775% ( 54) 00:13:37.503 10130.893 - 10188.129: 5.0310% ( 68) 00:13:37.503 10188.129 - 10245.366: 5.6184% ( 53) 00:13:37.503 10245.366 - 10302.603: 6.2389% ( 56) 00:13:37.503 10302.603 - 10359.839: 7.1476% ( 82) 00:13:37.503 10359.839 - 10417.076: 7.9344% ( 71) 00:13:37.503 10417.076 - 10474.313: 8.6547% ( 65) 00:13:37.503 10474.313 - 10531.549: 9.2309% ( 52) 00:13:37.503 10531.549 - 10588.786: 9.8515% ( 56) 00:13:37.503 10588.786 - 10646.023: 10.2726% ( 38) 00:13:37.503 10646.023 - 10703.259: 10.8488% ( 52) 00:13:37.503 10703.259 - 10760.496: 11.4362% ( 53) 00:13:37.503 10760.496 - 10817.733: 11.7354% ( 27) 00:13:37.503 10817.733 - 10874.969: 11.9238% ( 17) 00:13:37.503 10874.969 - 10932.206: 12.1343% ( 19) 00:13:37.503 10932.206 - 10989.443: 12.3449% ( 19) 00:13:37.503 10989.443 - 11046.679: 12.4778% ( 12) 00:13:37.503 11046.679 - 11103.916: 12.5887% ( 10) 00:13:37.503 11103.916 - 11161.153: 12.6662% ( 7) 00:13:37.503 11161.153 - 11218.390: 12.7992% ( 12) 00:13:37.503 11218.390 - 11275.626: 13.0430% ( 22) 00:13:37.503 11275.626 - 11332.863: 13.3644% ( 29) 00:13:37.503 11332.863 - 11390.100: 14.1512% ( 71) 00:13:37.503 11390.100 - 11447.336: 14.8382% ( 62) 00:13:37.503 11447.336 - 11504.573: 15.1152% ( 25) 00:13:37.503 11504.573 - 11561.810: 15.2704% ( 14) 00:13:37.503 11561.810 - 11619.046: 15.4034% ( 12) 00:13:37.503 11619.046 - 11676.283: 15.5142% ( 10) 00:13:37.503 11676.283 - 11733.520: 15.7247% ( 19) 00:13:37.503 11733.520 - 11790.756: 15.9242% ( 18) 00:13:37.503 11790.756 - 11847.993: 16.1126% ( 17) 00:13:37.503 11847.993 - 11905.230: 16.4229% ( 28) 00:13:37.503 11905.230 - 11962.466: 16.7442% ( 29) 00:13:37.503 11962.466 - 12019.703: 17.1543% ( 37) 00:13:37.503 12019.703 - 12076.940: 17.9189% ( 69) 00:13:37.503 12076.940 - 12134.176: 18.9384% ( 92) 00:13:37.503 12134.176 - 12191.413: 19.9136% ( 88) 00:13:37.503 12191.413 - 12248.650: 20.8223% ( 82) 00:13:37.503 12248.650 - 12305.886: 22.0966% ( 115) 00:13:37.503 12305.886 - 12363.123: 23.1937% ( 99) 00:13:37.503 12363.123 - 12420.360: 24.7008% ( 136) 00:13:37.503 12420.360 - 12477.597: 26.6179% ( 173) 00:13:37.503 12477.597 - 12534.833: 28.0363% ( 128) 00:13:37.503 12534.833 - 12592.070: 29.1556% ( 101) 00:13:37.503 12592.070 - 12649.307: 30.3524% ( 108) 00:13:37.503 12649.307 - 12706.543: 31.9149% ( 141) 00:13:37.503 12706.543 - 12763.780: 33.3998% ( 134) 00:13:37.503 12763.780 - 12821.017: 34.6188% ( 110) 
00:13:37.503 12821.017 - 12878.253: 36.0926% ( 133) 00:13:37.503 12878.253 - 12935.490: 38.0098% ( 173) 00:13:37.503 12935.490 - 12992.727: 39.6720% ( 150) 00:13:37.503 12992.727 - 13049.963: 41.0793% ( 127) 00:13:37.503 13049.963 - 13107.200: 42.2429% ( 105) 00:13:37.503 13107.200 - 13164.437: 43.2402% ( 90) 00:13:37.503 13164.437 - 13221.673: 44.2154% ( 88) 00:13:37.503 13221.673 - 13278.910: 45.2017% ( 89) 00:13:37.503 13278.910 - 13336.147: 46.2988% ( 99) 00:13:37.503 13336.147 - 13393.383: 47.1077% ( 73) 00:13:37.503 13393.383 - 13450.620: 47.8502% ( 67) 00:13:37.503 13450.620 - 13507.857: 48.8364% ( 89) 00:13:37.503 13507.857 - 13565.093: 50.0887% ( 113) 00:13:37.503 13565.093 - 13622.330: 51.1636% ( 97) 00:13:37.503 13622.330 - 13679.567: 52.1055% ( 85) 00:13:37.503 13679.567 - 13736.803: 53.1361% ( 93) 00:13:37.503 13736.803 - 13794.040: 53.8785% ( 67) 00:13:37.503 13794.040 - 13851.277: 54.5102% ( 57) 00:13:37.503 13851.277 - 13908.514: 55.0864% ( 52) 00:13:37.503 13908.514 - 13965.750: 55.8067% ( 65) 00:13:37.503 13965.750 - 14022.987: 56.5381% ( 66) 00:13:37.503 14022.987 - 14080.224: 57.2363% ( 63) 00:13:37.503 14080.224 - 14137.460: 57.8347% ( 54) 00:13:37.503 14137.460 - 14194.697: 59.0426% ( 109) 00:13:37.503 14194.697 - 14251.934: 59.8958% ( 77) 00:13:37.503 14251.934 - 14309.170: 60.6051% ( 64) 00:13:37.503 14309.170 - 14366.407: 61.4251% ( 74) 00:13:37.503 14366.407 - 14423.644: 62.0457% ( 56) 00:13:37.503 14423.644 - 14480.880: 62.6773% ( 57) 00:13:37.503 14480.880 - 14538.117: 63.1760% ( 45) 00:13:37.503 14538.117 - 14595.354: 63.8409% ( 60) 00:13:37.503 14595.354 - 14652.590: 64.4504% ( 55) 00:13:37.503 14652.590 - 14767.064: 65.8245% ( 124) 00:13:37.503 14767.064 - 14881.537: 66.9880% ( 105) 00:13:37.503 14881.537 - 14996.010: 68.8387% ( 167) 00:13:37.503 14996.010 - 15110.484: 70.2238% ( 125) 00:13:37.503 15110.484 - 15224.957: 71.2434% ( 92) 00:13:37.503 15224.957 - 15339.431: 71.9415% ( 63) 00:13:37.503 15339.431 - 15453.904: 72.6840% ( 67) 00:13:37.503 15453.904 - 15568.377: 73.4707% ( 71) 00:13:37.503 15568.377 - 15682.851: 74.5346% ( 96) 00:13:37.503 15682.851 - 15797.324: 75.9087% ( 124) 00:13:37.503 15797.324 - 15911.797: 77.1387% ( 111) 00:13:37.503 15911.797 - 16026.271: 78.2358% ( 99) 00:13:37.503 16026.271 - 16140.744: 78.9783% ( 67) 00:13:37.503 16140.744 - 16255.217: 79.7872% ( 73) 00:13:37.503 16255.217 - 16369.691: 80.4410% ( 59) 00:13:37.503 16369.691 - 16484.164: 81.1946% ( 68) 00:13:37.503 16484.164 - 16598.638: 82.2584% ( 96) 00:13:37.503 16598.638 - 16713.111: 83.1560% ( 81) 00:13:37.503 16713.111 - 16827.584: 84.5191% ( 123) 00:13:37.503 16827.584 - 16942.058: 85.7159% ( 108) 00:13:37.503 16942.058 - 17056.531: 86.7465% ( 93) 00:13:37.503 17056.531 - 17171.004: 87.7438% ( 90) 00:13:37.503 17171.004 - 17285.478: 88.5860% ( 76) 00:13:37.503 17285.478 - 17399.951: 89.2066% ( 56) 00:13:37.503 17399.951 - 17514.424: 89.9712% ( 69) 00:13:37.503 17514.424 - 17628.898: 90.7580% ( 71) 00:13:37.503 17628.898 - 17743.371: 91.3896% ( 57) 00:13:37.503 17743.371 - 17857.845: 91.8994% ( 46) 00:13:37.503 17857.845 - 17972.318: 92.2762% ( 34) 00:13:37.503 17972.318 - 18086.791: 92.6418% ( 33) 00:13:37.503 18086.791 - 18201.265: 93.1516% ( 46) 00:13:37.503 18201.265 - 18315.738: 93.9273% ( 70) 00:13:37.503 18315.738 - 18430.211: 94.6365% ( 64) 00:13:37.503 18430.211 - 18544.685: 95.4344% ( 72) 00:13:37.503 18544.685 - 18659.158: 96.3431% ( 82) 00:13:37.503 18659.158 - 18773.631: 96.8639% ( 47) 00:13:37.503 18773.631 - 18888.105: 97.0966% ( 21) 00:13:37.503 
18888.105 - 19002.578: 97.2961% ( 18) 00:13:37.503 19002.578 - 19117.052: 97.5066% ( 19) 00:13:37.503 19117.052 - 19231.525: 97.6618% ( 14) 00:13:37.503 19231.525 - 19345.998: 97.7504% ( 8) 00:13:37.503 19345.998 - 19460.472: 97.7837% ( 3) 00:13:37.503 19460.472 - 19574.945: 97.8280% ( 4) 00:13:37.503 19574.945 - 19689.418: 97.9499% ( 11) 00:13:37.503 19689.418 - 19803.892: 98.0386% ( 8) 00:13:37.503 19803.892 - 19918.365: 98.2270% ( 17) 00:13:37.503 19918.365 - 20032.838: 98.4375% ( 19) 00:13:37.503 20032.838 - 20147.312: 98.4818% ( 4) 00:13:37.503 20147.312 - 20261.785: 98.5262% ( 4) 00:13:37.503 20261.785 - 20376.259: 98.5816% ( 5) 00:13:37.503 27588.080 - 27702.554: 98.5926% ( 1) 00:13:37.503 27702.554 - 27817.027: 98.6148% ( 2) 00:13:37.503 27817.027 - 27931.500: 98.6370% ( 2) 00:13:37.503 28503.867 - 28618.341: 98.6591% ( 2) 00:13:37.503 28618.341 - 28732.814: 98.7035% ( 4) 00:13:37.503 28732.814 - 28847.287: 98.7256% ( 2) 00:13:37.503 28847.287 - 28961.761: 98.7589% ( 3) 00:13:37.503 28961.761 - 29076.234: 98.7810% ( 2) 00:13:37.503 29076.234 - 29190.707: 98.8254% ( 4) 00:13:37.503 29190.707 - 29305.181: 98.8586% ( 3) 00:13:37.503 29305.181 - 29534.128: 98.9251% ( 6) 00:13:37.503 29534.128 - 29763.074: 99.0027% ( 7) 00:13:37.503 29763.074 - 29992.021: 99.0691% ( 6) 00:13:37.503 29992.021 - 30220.968: 99.1135% ( 4) 00:13:37.503 30220.968 - 30449.914: 99.1689% ( 5) 00:13:37.503 30449.914 - 30678.861: 99.2243% ( 5) 00:13:37.503 30678.861 - 30907.808: 99.2686% ( 4) 00:13:37.503 30907.808 - 31136.755: 99.2908% ( 2) 00:13:37.503 37776.210 - 38005.156: 99.3573% ( 6) 00:13:37.503 38005.156 - 38234.103: 99.3794% ( 2) 00:13:37.503 38234.103 - 38463.050: 99.5013% ( 11) 00:13:37.503 38463.050 - 38691.997: 99.5235% ( 2) 00:13:37.503 38920.943 - 39149.890: 99.5789% ( 5) 00:13:37.503 39149.890 - 39378.837: 99.6343% ( 5) 00:13:37.503 39378.837 - 39607.783: 99.6897% ( 5) 00:13:37.503 39607.783 - 39836.730: 99.7562% ( 6) 00:13:37.503 39836.730 - 40065.677: 99.8227% ( 6) 00:13:37.503 40065.677 - 40294.624: 99.8892% ( 6) 00:13:37.503 40294.624 - 40523.570: 99.9446% ( 5) 00:13:37.503 40523.570 - 40752.517: 100.0000% ( 5) 00:13:37.503 00:13:37.503 18:17:30 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:13:37.503 00:13:37.503 real 0m2.575s 00:13:37.503 user 0m2.219s 00:13:37.503 sys 0m0.249s 00:13:37.503 18:17:30 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.503 18:17:30 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:13:37.503 ************************************ 00:13:37.503 END TEST nvme_perf 00:13:37.503 ************************************ 00:13:37.503 18:17:30 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:13:37.503 18:17:30 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:37.503 18:17:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.503 18:17:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:37.503 ************************************ 00:13:37.503 START TEST nvme_hello_world 00:13:37.503 ************************************ 00:13:37.503 18:17:30 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:13:37.503 Initializing NVMe Controllers 00:13:37.503 Attached to 0000:00:10.0 00:13:37.503 Namespace ID: 1 size: 6GB 00:13:37.503 Attached to 0000:00:11.0 00:13:37.503 Namespace ID: 1 size: 5GB 00:13:37.503 Attached to 0000:00:13.0 00:13:37.503 Namespace 
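[Editor's note: the nvme_perf histograms above are built by timestamping each IO at submission and at completion. Below is a minimal sketch of taking one such sample with SPDK's public tick API; the io_sample/time_one_read names and the pre-attached ns, qpair, and DMA buffer are assumptions for illustration, not the perf tool's actual source.]

```c
#include <stdio.h>
#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

struct io_sample {
	uint64_t submit_tsc;  /* timestamp taken just before submission */
	bool     done;
};

static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_sample *s = arg;
	uint64_t now = spdk_get_ticks();

	/* Convert the tick delta to microseconds, as in the "Range in us" column. */
	double us = (double)(now - s->submit_tsc) * 1e6 / spdk_get_ticks_hz();
	printf("IO latency: %.3f us (%s)\n",
	       us, spdk_nvme_cpl_is_error(cpl) ? "error" : "ok");
	s->done = true;
}

/* Assumes ns/qpair came from probe/attach and buf from spdk_zmalloc(). */
static void
time_one_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
{
	struct io_sample s = { .submit_tsc = spdk_get_ticks(), .done = false };

	if (spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* LBA */, 1 /* blocks */,
				  io_complete, &s, 0) != 0) {
		return;
	}
	while (!s.done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}
}
```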
00:13:37.503 18:17:30 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:13:37.503 18:17:30 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:13:37.503 18:17:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:37.503 18:17:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:13:37.503 ************************************
00:13:37.503 START TEST nvme_hello_world
00:13:37.503 ************************************
00:13:37.503 18:17:30 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:13:37.503 Initializing NVMe Controllers
00:13:37.503 Attached to 0000:00:10.0
00:13:37.503 Namespace ID: 1 size: 6GB
00:13:37.503 Attached to 0000:00:11.0
00:13:37.503 Namespace ID: 1 size: 5GB
00:13:37.503 Attached to 0000:00:13.0
00:13:37.503 Namespace ID: 1 size: 1GB
00:13:37.503 Attached to 0000:00:12.0
00:13:37.503 Namespace ID: 1 size: 4GB
00:13:37.503 Namespace ID: 2 size: 4GB
00:13:37.503 Namespace ID: 3 size: 4GB
00:13:37.503 Initialization complete.
00:13:37.504 INFO: using host memory buffer for IO
00:13:37.504 Hello world!
00:13:37.504 INFO: using host memory buffer for IO
00:13:37.504 Hello world!
00:13:37.504 INFO: using host memory buffer for IO
00:13:37.504 Hello world!
00:13:37.504 INFO: using host memory buffer for IO
00:13:37.504 Hello world!
00:13:37.504 INFO: using host memory buffer for IO
00:13:37.504 Hello world!
00:13:37.504 INFO: using host memory buffer for IO
00:13:37.504 Hello world!
00:13:37.762 
00:13:37.762 real	0m0.277s
00:13:37.762 user	0m0.119s
00:13:37.762 sys	0m0.114s
00:13:37.762 18:17:30 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:37.762 18:17:30 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:13:37.762 ************************************
00:13:37.762 END TEST nvme_hello_world
00:13:37.762 ************************************
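[Editor's note: hello_world is SPDK's canonical example: write a string to LBA 0 of each attached namespace, read it back, and print it, which produces the "Hello world!" lines above. A condensed sketch of that write-then-read chain, assuming ns and qpair were obtained during probe/attach; error handling is trimmed.]

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool g_done;

struct hello_ctx {
	struct spdk_nvme_ns    *ns;
	struct spdk_nvme_qpair *qpair;
	char                   *buf;
};

static void
read_complete(void *buf, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		printf("%s", (char *)buf);   /* "Hello world!" read back from LBA 0 */
	}
	g_done = true;
}

static void
write_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct hello_ctx *ctx = arg;

	/* The write landed; clear the buffer and read the same LBA back. */
	memset(ctx->buf, 0, spdk_nvme_ns_get_sector_size(ctx->ns));
	spdk_nvme_ns_cmd_read(ctx->ns, ctx->qpair, ctx->buf, 0, 1,
			      read_complete, ctx->buf, 0);
}

void
hello_world(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair)
{
	struct hello_ctx ctx = { ns, qpair, NULL };
	uint32_t sector = spdk_nvme_ns_get_sector_size(ns);

	/* IO buffers must come from SPDK's DMA-safe allocator. */
	ctx.buf = spdk_zmalloc(sector, 0x1000, NULL,
			       SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	snprintf(ctx.buf, sector, "%s", "Hello world!\n");

	spdk_nvme_ns_cmd_write(ns, qpair, ctx.buf, 0 /* LBA */, 1 /* blocks */,
			       write_complete, &ctx, 0);
	while (!g_done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}
	spdk_free(ctx.buf);
}
```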
00:13:37.762 18:17:30 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:13:37.762 18:17:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:37.762 18:17:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:37.762 18:17:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:13:37.762 ************************************
00:13:37.762 START TEST nvme_sgl
00:13:37.762 ************************************
00:13:37.762 18:17:30 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:13:38.020 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:13:38.020 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:13:38.020 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:13:38.020 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:13:38.020 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:13:38.020 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:13:38.020 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:13:38.020 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:13:38.020 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:13:38.020 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:13:38.020 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:13:38.020 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:13:38.020 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:13:38.020 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:13:38.020 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:13:38.020 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:13:38.020 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:13:38.020 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:13:38.020 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:13:38.020 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:13:38.020 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:13:38.020 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:13:38.020 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:13:38.020 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:13:38.020 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:13:38.020 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:13:38.020 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:13:38.020 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:13:38.020 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:13:38.020 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:13:38.020 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:13:38.020 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:13:38.020 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:13:38.020 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:13:38.020 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:13:38.020 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:13:38.020 NVMe Readv/Writev Request test
00:13:38.020 Attached to 0000:00:10.0
00:13:38.020 Attached to 0000:00:11.0
00:13:38.020 Attached to 0000:00:13.0
00:13:38.020 Attached to 0000:00:12.0
00:13:38.020 0000:00:10.0: build_io_request_2 test passed
00:13:38.020 0000:00:10.0: build_io_request_4 test passed
00:13:38.020 0000:00:10.0: build_io_request_5 test passed
00:13:38.020 0000:00:10.0: build_io_request_6 test passed
00:13:38.020 0000:00:10.0: build_io_request_7 test passed
00:13:38.020 0000:00:10.0: build_io_request_10 test passed
00:13:38.020 0000:00:11.0: build_io_request_2 test passed
00:13:38.020 0000:00:11.0: build_io_request_4 test passed
00:13:38.020 0000:00:11.0: build_io_request_5 test passed
00:13:38.020 0000:00:11.0: build_io_request_6 test passed
00:13:38.020 0000:00:11.0: build_io_request_7 test passed
00:13:38.020 0000:00:11.0: build_io_request_10 test passed
00:13:38.020 Cleaning up...
00:13:38.020 
00:13:38.020 real	0m0.321s
00:13:38.020 user	0m0.152s
00:13:38.020 sys	0m0.127s
00:13:38.020 18:17:31 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:38.020 18:17:31 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:13:38.020 ************************************
00:13:38.020 END TEST nvme_sgl
00:13:38.020 ************************************
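[Editor's note: the sgl test builds IO requests from scatter-gather element lists; requests whose segment lengths do not add up to a whole payload are rejected with "Invalid IO length parameter" above. A sketch of the SGL submission path via spdk_nvme_ns_cmd_writev(); the two-element sgl_ctx layout is an illustrative assumption, not the test's actual source.]

```c
#include <sys/uio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

struct sgl_ctx {
	struct iovec iovs[2];  /* caller fills these in before submitting */
	int          cur;      /* cursor for the next_sge callback */
};

static void
reset_sgl(void *cb_arg, uint32_t offset)
{
	struct sgl_ctx *ctx = cb_arg;

	/* Called before (re)walking the list; a real implementation must seek
	 * to the byte offset. Assuming offset 0 keeps the sketch short. */
	(void)offset;
	ctx->cur = 0;
}

static int
next_sge(void *cb_arg, void **address, uint32_t *length)
{
	struct sgl_ctx *ctx = cb_arg;

	*address = ctx->iovs[ctx->cur].iov_base;
	*length = ctx->iovs[ctx->cur].iov_len;
	ctx->cur++;
	return 0;
}

static void
io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg; (void)cpl;   /* check the completion status here */
}

int
write_two_segments(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		   struct sgl_ctx *ctx, uint32_t lba_count)
{
	/* Total iovec bytes must equal lba_count * sector size, or the driver
	 * rejects the request ("Invalid IO length parameter" above). */
	return spdk_nvme_ns_cmd_writev(ns, qpair, 0 /* LBA */, lba_count,
				       io_done, ctx, 0, reset_sgl, next_sge);
}
```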
00:13:38.020 18:17:31 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:13:38.020 18:17:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:38.020 18:17:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:38.020 18:17:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:13:38.020 ************************************
00:13:38.020 START TEST nvme_e2edp
00:13:38.020 ************************************
00:13:38.020 18:17:31 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:13:38.279 NVMe Write/Read with End-to-End data protection test
00:13:38.279 Attached to 0000:00:10.0
00:13:38.279 Attached to 0000:00:11.0
00:13:38.279 Attached to 0000:00:13.0
00:13:38.279 Attached to 0000:00:12.0
00:13:38.279 Cleaning up...
00:13:38.279 
00:13:38.279 real	0m0.251s
00:13:38.279 user	0m0.086s
00:13:38.279 sys	0m0.120s
00:13:38.279 18:17:31 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:38.279 18:17:31 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:13:38.279 ************************************
00:13:38.279 END TEST nvme_e2edp
00:13:38.279 ************************************
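[Editor's note: nvme_dp exercises end-to-end data protection: on a namespace formatted with protection information, per-command flags ask the controller to verify guard and reference tags on the way through. A minimal sketch under that assumption, not the test's actual code.]

```c
#include "spdk/nvme.h"

int
protected_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		void *buf, uint64_t lba, uint32_t lba_count,
		spdk_nvme_cmd_cb cb_fn, void *cb_arg)
{
	uint32_t io_flags = 0;

	/* Only namespaces formatted with PI can do end-to-end checking. */
	if (spdk_nvme_ns_get_pi_type(ns) != SPDK_NVME_FMT_NVM_PROTECTION_DISABLE) {
		/* Ask the controller to verify the CRC guard and reference tag. */
		io_flags |= SPDK_NVME_IO_FLAGS_PRCHK_GUARD |
			    SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;
	}
	return spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
				      cb_fn, cb_arg, io_flags);
}
```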
00:13:38.279 18:17:31 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:13:38.279 18:17:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:38.279 18:17:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:38.279 18:17:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:13:38.279 ************************************
00:13:38.279 START TEST nvme_reserve
00:13:38.279 ************************************
00:13:38.279 18:17:31 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:13:38.538 =====================================================
00:13:38.538 NVMe Controller at PCI bus 0, device 16, function 0
00:13:38.538 =====================================================
00:13:38.538 Reservations: Not Supported
00:13:38.538 =====================================================
00:13:38.538 NVMe Controller at PCI bus 0, device 17, function 0
00:13:38.538 =====================================================
00:13:38.538 Reservations: Not Supported
00:13:38.538 =====================================================
00:13:38.538 NVMe Controller at PCI bus 0, device 19, function 0
00:13:38.538 =====================================================
00:13:38.538 Reservations: Not Supported
00:13:38.538 =====================================================
00:13:38.538 NVMe Controller at PCI bus 0, device 18, function 0
00:13:38.538 =====================================================
00:13:38.538 Reservations: Not Supported
00:13:38.538 Reservation test passed
00:13:38.538 
00:13:38.538 real	0m0.269s
00:13:38.538 user	0m0.095s
00:13:38.538 sys	0m0.131s
00:13:38.538 18:17:31 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:38.538 18:17:31 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:13:38.538 ************************************
00:13:38.538 END TEST nvme_reserve
00:13:38.538 ************************************
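[Editor's note: the reserve test first checks the Reservations bit in the controller's ONCS capabilities, which is why the QEMU controllers above all report "Not Supported". A sketch of that check plus the register command that would follow when support is present; the key value is an illustrative assumption.]

```c
#include <stdio.h>
#include "spdk/nvme.h"

static void
resv_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg; (void)cpl;   /* inspect the completion status here */
}

void
try_reservation(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns,
		struct spdk_nvme_qpair *qpair)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	if (!cdata->oncs.reservations) {
		printf("Reservations:                Not Supported\n");
		return;
	}

	/* Register a new host reservation key (crkey is unused for a fresh
	 * register; 0xa11ce is an arbitrary illustrative key). */
	struct spdk_nvme_reservation_register_data rr_data = {
		.crkey = 0,
		.nrkey = 0xa11ce,
	};
	spdk_nvme_ns_cmd_reservation_register(ns, qpair, &rr_data,
					      true /* ignore_key */,
					      SPDK_NVME_RESERVE_REGISTER_KEY,
					      SPDK_NVME_RESERVE_PTPL_NO_CHANGES,
					      resv_done, NULL);
}
```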
00:13:38.796 18:17:31 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:13:38.796 18:17:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:38.796 18:17:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:38.796 18:17:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:13:38.796 ************************************
00:13:38.796 START TEST nvme_err_injection
00:13:38.796 ************************************
00:13:38.796 18:17:31 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:13:39.055 NVMe Error Injection test
00:13:39.055 Attached to 0000:00:10.0
00:13:39.055 Attached to 0000:00:11.0
00:13:39.055 Attached to 0000:00:13.0
00:13:39.055 Attached to 0000:00:12.0
00:13:39.055 0000:00:10.0: get features failed as expected
00:13:39.055 0000:00:11.0: get features failed as expected
00:13:39.055 0000:00:13.0: get features failed as expected
00:13:39.055 0000:00:12.0: get features failed as expected
00:13:39.055 0000:00:10.0: get features successfully as expected
00:13:39.055 0000:00:11.0: get features successfully as expected
00:13:39.055 0000:00:13.0: get features successfully as expected
00:13:39.055 0000:00:12.0: get features successfully as expected
00:13:39.055 0000:00:10.0: read failed as expected
00:13:39.055 0000:00:11.0: read failed as expected
00:13:39.055 0000:00:13.0: read failed as expected
00:13:39.055 0000:00:12.0: read failed as expected
00:13:39.055 0000:00:11.0: read successfully as expected
00:13:39.055 0000:00:10.0: read successfully as expected
00:13:39.055 0000:00:13.0: read successfully as expected
00:13:39.055 0000:00:12.0: read successfully as expected
00:13:39.055 Cleaning up...
00:13:39.055 
00:13:39.055 real	0m0.280s
00:13:39.055 user	0m0.112s
00:13:39.055 sys	0m0.126s
00:13:39.055 18:17:32 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:39.055 18:17:32 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:13:39.055 ************************************
00:13:39.055 END TEST nvme_err_injection
00:13:39.055 ************************************
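[Editor's note: err_injection queues synthetic failures so the "failed as expected" paths above run without real hardware faults. A sketch using SPDK's error-injection hook; this facility may require a debug build of SPDK, and passing a NULL qpair to target the admin queue is my reading of the API, so treat both as assumptions.]

```c
#include "spdk/nvme.h"

int
inject_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Assumed: a NULL qpair targets the admin queue. Fail exactly one
	 * Get Features command with a generic Invalid Opcode status; the
	 * command is still submitted normally (do_not_submit = false). */
	return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
						       SPDK_NVME_OPC_GET_FEATURES,
						       false /* do_not_submit */,
						       0 /* timeout_in_us */,
						       1 /* err_count */,
						       SPDK_NVME_SCT_GENERIC,
						       SPDK_NVME_SC_INVALID_OPCODE);
}
```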
00:13:39.055 18:17:32 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:13:39.055 18:17:32 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:13:39.055 18:17:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:39.055 18:17:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:13:39.055 ************************************
00:13:39.055 START TEST nvme_overhead
00:13:39.055 ************************************
00:13:39.055 18:17:32 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:13:40.431 Initializing NVMe Controllers
00:13:40.431 Attached to 0000:00:10.0
00:13:40.431 Attached to 0000:00:11.0
00:13:40.431 Attached to 0000:00:13.0
00:13:40.431 Attached to 0000:00:12.0
00:13:40.431 Initialization complete. Launching workers.
00:13:40.431 submit (in ns)   avg, min, max =  12781.4,  9993.0,  55626.2
00:13:40.431 complete (in ns) avg, min, max =   7806.8,  6250.7,  64096.9
00:13:40.431 
00:13:40.431 Submit histogram
00:13:40.431 ================
00:13:40.431        Range in us     Cumulative     Count
00:13:40.432 [per-bucket distribution: 9.949 - 55.672 us, cumulative 0.0075% -> 100.0000%]
00:13:40.432 
00:13:40.432 Complete histogram
00:13:40.432 ==================
00:13:40.432        Range in us     Cumulative     Count
00:13:40.433 [per-bucket distribution: 6.232 - 64.391 us, cumulative 0.0075% -> 100.0000%]
00:13:40.433 
00:13:40.433 real	0m1.302s
00:13:40.433 user	0m1.109s
00:13:40.433 sys	0m0.138s
00:13:40.433 18:17:33 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:40.433 18:17:33 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:13:40.433 ************************************
00:13:40.433 END TEST nvme_overhead
00:13:40.433 ************************************
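[Editor's note: the overhead tool separates submit-side cost (CPU time spent queueing a command) from complete-side cost (time spent in the poll that reaps it), which is what the two histograms above report. A rough sketch of how such numbers can be taken with the tick API; the bookkeeping here is illustrative, not the tool's source.]

```c
#include <stdio.h>
#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cpl;
	*(bool *)arg = true;
}

void
measure_overhead(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
{
	double ns_per_tick = 1e9 / spdk_get_ticks_hz();
	bool done = false;

	/* Submit-side overhead: how long the submission call itself takes. */
	uint64_t t0 = spdk_get_ticks();
	spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_done, &done, 0);
	uint64_t submit_ticks = spdk_get_ticks() - t0;

	/* Complete-side overhead: duration of the poll that reaps the IO. */
	uint64_t complete_ticks = 0;
	while (!done) {
		t0 = spdk_get_ticks();
		int n = spdk_nvme_qpair_process_completions(qpair, 0);
		if (n > 0) {
			complete_ticks = spdk_get_ticks() - t0;
		}
	}
	printf("submit %.1f ns, complete %.1f ns\n",
	       submit_ticks * ns_per_tick, complete_ticks * ns_per_tick);
}
```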
99.9552% ( 1) 00:13:40.433 31.748 - 31.972: 99.9626% ( 1) 00:13:40.433 34.208 - 34.431: 99.9701% ( 1) 00:13:40.433 36.891 - 37.114: 99.9776% ( 1) 00:13:40.433 38.456 - 38.679: 99.9851% ( 1) 00:13:40.433 42.704 - 42.928: 99.9925% ( 1) 00:13:40.433 63.944 - 64.391: 100.0000% ( 1) 00:13:40.433 00:13:40.433 00:13:40.433 real 0m1.302s 00:13:40.433 user 0m1.109s 00:13:40.433 sys 0m0.138s 00:13:40.433 18:17:33 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.433 18:17:33 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:13:40.433 ************************************ 00:13:40.433 END TEST nvme_overhead 00:13:40.433 ************************************ 00:13:40.433 18:17:33 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:13:40.433 18:17:33 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:13:40.433 18:17:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.433 18:17:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.433 ************************************ 00:13:40.433 START TEST nvme_arbitration 00:13:40.433 ************************************ 00:13:40.433 18:17:33 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:13:43.717 Initializing NVMe Controllers 00:13:43.717 Attached to 0000:00:10.0 00:13:43.717 Attached to 0000:00:11.0 00:13:43.717 Attached to 0000:00:13.0 00:13:43.717 Attached to 0000:00:12.0 00:13:43.717 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:13:43.717 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:13:43.717 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:13:43.717 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:13:43.717 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:13:43.717 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:13:43.717 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:43.717 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:13:43.717 Initialization complete. Launching workers. 
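A note on the arbitration invocation echoed above: the example takes its whole workload description on the command line, and the per-core "IO/s" lines it prints can be aggregated straight from stdout. A minimal re-run sketch, assuming my reading of the common flags (-q queue depth, -w workload, -M read percentage, -t run time in seconds, -c core mask, -i shared memory group ID); the remaining flags are copied verbatim from the trace, and the awk field positions assume the output layout shown in the lines that follow:

    arb=/home/vagrant/spdk_repo/spdk/build/examples/arbitration   # path from the trace
    "$arb" -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 |
      awk '/IO\/s/ { sub(":", "", $7); iops[$7] += $8 }   # $7 = core id, $8 = IO/s
           END { for (c in iops) printf "core %s: %.2f IO/s\n", c, iops[c] }'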
00:13:43.717 Starting thread on core 1 with urgent priority queue 00:13:43.717 Starting thread on core 2 with urgent priority queue 00:13:43.717 Starting thread on core 3 with urgent priority queue 00:13:43.717 Starting thread on core 0 with urgent priority queue 00:13:43.717 QEMU NVMe Ctrl (12340 ) core 0: 469.33 IO/s 213.07 secs/100000 ios 00:13:43.717 QEMU NVMe Ctrl (12342 ) core 0: 469.33 IO/s 213.07 secs/100000 ios 00:13:43.717 QEMU NVMe Ctrl (12341 ) core 1: 490.67 IO/s 203.80 secs/100000 ios 00:13:43.717 QEMU NVMe Ctrl (12342 ) core 1: 490.67 IO/s 203.80 secs/100000 ios 00:13:43.717 QEMU NVMe Ctrl (12343 ) core 2: 469.33 IO/s 213.07 secs/100000 ios 00:13:43.717 QEMU NVMe Ctrl (12342 ) core 3: 490.67 IO/s 203.80 secs/100000 ios 00:13:43.717 ======================================================== 00:13:43.717 00:13:43.717 00:13:43.717 real 0m3.417s 00:13:43.717 user 0m9.426s 00:13:43.717 sys 0m0.145s 00:13:43.717 18:17:37 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.717 18:17:37 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:13:43.717 ************************************ 00:13:43.717 END TEST nvme_arbitration 00:13:43.717 ************************************ 00:13:43.975 18:17:37 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:13:43.975 18:17:37 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:43.976 18:17:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.976 18:17:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.976 ************************************ 00:13:43.976 START TEST nvme_single_aen 00:13:43.976 ************************************ 00:13:43.976 18:17:37 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:13:44.234 Asynchronous Event Request test 00:13:44.234 Attached to 0000:00:10.0 00:13:44.234 Attached to 0000:00:11.0 00:13:44.234 Attached to 0000:00:13.0 00:13:44.234 Attached to 0000:00:12.0 00:13:44.234 Reset controller to setup AER completions for this process 00:13:44.234 Registering asynchronous event callbacks... 
00:13:44.234 Getting orig temperature thresholds of all controllers 00:13:44.234 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:44.234 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:44.234 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:44.234 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:44.234 Setting all controllers temperature threshold low to trigger AER 00:13:44.234 Waiting for all controllers temperature threshold to be set lower 00:13:44.234 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:44.234 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:13:44.234 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:44.234 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:13:44.234 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:44.234 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:13:44.234 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:44.234 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:13:44.234 Waiting for all controllers to trigger AER and reset threshold 00:13:44.234 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:44.234 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:44.234 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:44.234 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:44.234 Cleaning up... 00:13:44.234 00:13:44.234 real 0m0.253s 00:13:44.234 user 0m0.092s 00:13:44.234 sys 0m0.109s 00:13:44.234 18:17:37 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.234 18:17:37 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:13:44.234 ************************************ 00:13:44.234 END TEST nvme_single_aen 00:13:44.234 ************************************ 00:13:44.234 18:17:37 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:13:44.234 18:17:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:44.234 18:17:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.234 18:17:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.234 ************************************ 00:13:44.234 START TEST nvme_doorbell_aers 00:13:44.234 ************************************ 00:13:44.234 18:17:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:13:44.234 18:17:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:13:44.234 18:17:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:13:44.234 18:17:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:13:44.234 18:17:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:13:44.234 18:17:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:44.234 18:17:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:13:44.234 18:17:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:44.234 18:17:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:44.234 18:17:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
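The get_nvme_bdfs expansion traced above (its element-count check follows in the next entries) is the enumeration pattern these tests share: scripts/gen_nvme.sh emits a JSON bdev config and jq pulls each controller's PCI address out of it. A self-contained sketch built from the exact pipeline in the trace:

    rootdir=/home/vagrant/spdk_repo/spdk   # repo path as used in this run
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0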
00:13:44.234 18:17:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:44.234 18:17:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:44.234 18:17:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:13:44.234 18:17:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:44.493 [2024-11-26 18:17:37.778713] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:13:54.464 Executing: test_write_invalid_db 00:13:54.464 Waiting for AER completion... 00:13:54.464 Failure: test_write_invalid_db 00:13:54.464 00:13:54.464 Executing: test_invalid_db_write_overflow_sq 00:13:54.464 Waiting for AER completion... 00:13:54.464 Failure: test_invalid_db_write_overflow_sq 00:13:54.464 00:13:54.464 Executing: test_invalid_db_write_overflow_cq 00:13:54.464 Waiting for AER completion... 00:13:54.464 Failure: test_invalid_db_write_overflow_cq 00:13:54.464 00:13:54.464 18:17:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:13:54.464 18:17:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:54.722 [2024-11-26 18:17:47.831718] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:14:04.694 Executing: test_write_invalid_db 00:14:04.694 Waiting for AER completion... 00:14:04.694 Failure: test_write_invalid_db 00:14:04.694 00:14:04.694 Executing: test_invalid_db_write_overflow_sq 00:14:04.694 Waiting for AER completion... 00:14:04.694 Failure: test_invalid_db_write_overflow_sq 00:14:04.694 00:14:04.694 Executing: test_invalid_db_write_overflow_cq 00:14:04.694 Waiting for AER completion... 00:14:04.694 Failure: test_invalid_db_write_overflow_cq 00:14:04.694 00:14:04.694 18:17:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:04.694 18:17:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:04.694 [2024-11-26 18:17:57.862612] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:14:14.750 Executing: test_write_invalid_db 00:14:14.750 Waiting for AER completion... 00:14:14.750 Failure: test_write_invalid_db 00:14:14.750 00:14:14.750 Executing: test_invalid_db_write_overflow_sq 00:14:14.750 Waiting for AER completion... 00:14:14.750 Failure: test_invalid_db_write_overflow_sq 00:14:14.750 00:14:14.750 Executing: test_invalid_db_write_overflow_cq 00:14:14.750 Waiting for AER completion... 
00:14:14.750 Failure: test_invalid_db_write_overflow_cq 00:14:14.750 00:14:14.750 18:18:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:14.750 18:18:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:14:14.750 [2024-11-26 18:18:07.923185] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:14:24.719 Executing: test_write_invalid_db 00:14:24.719 Waiting for AER completion... 00:14:24.719 Failure: test_write_invalid_db 00:14:24.719 00:14:24.719 Executing: test_invalid_db_write_overflow_sq 00:14:24.719 Waiting for AER completion... 00:14:24.719 Failure: test_invalid_db_write_overflow_sq 00:14:24.719 00:14:24.719 Executing: test_invalid_db_write_overflow_cq 00:14:24.719 Waiting for AER completion... 00:14:24.719 Failure: test_invalid_db_write_overflow_cq 00:14:24.719 00:14:24.719 00:14:24.719 real 0m40.280s 00:14:24.719 user 0m33.242s 00:14:24.719 sys 0m6.669s 00:14:24.719 18:18:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.719 18:18:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:14:24.719 ************************************ 00:14:24.719 END TEST nvme_doorbell_aers 00:14:24.719 ************************************ 00:14:24.719 18:18:17 nvme -- nvme/nvme.sh@97 -- # uname 00:14:24.719 18:18:17 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:14:24.719 18:18:17 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:14:24.719 18:18:17 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:14:24.719 18:18:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.719 18:18:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:24.719 ************************************ 00:14:24.719 START TEST nvme_multi_aen 00:14:24.719 ************************************ 00:14:24.719 18:18:17 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:14:24.719 [2024-11-26 18:18:17.989126] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:14:24.719 [2024-11-26 18:18:17.989221] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:14:24.719 [2024-11-26 18:18:17.989235] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:14:24.719 [2024-11-26 18:18:17.990392] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:14:24.719 [2024-11-26 18:18:17.990427] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:14:24.720 [2024-11-26 18:18:17.990437] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:14:24.720 [2024-11-26 18:18:17.991277] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. 
Dropping the request. 00:14:24.720 [2024-11-26 18:18:17.991313] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:14:24.720 [2024-11-26 18:18:17.991323] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:14:24.720 [2024-11-26 18:18:17.992165] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:14:24.720 [2024-11-26 18:18:17.992198] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:14:24.720 [2024-11-26 18:18:17.992208] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64907) is not found. Dropping the request. 00:14:24.720 Child process pid: 65428 00:14:25.010 [Child] Asynchronous Event Request test 00:14:25.010 [Child] Attached to 0000:00:10.0 00:14:25.010 [Child] Attached to 0000:00:11.0 00:14:25.010 [Child] Attached to 0000:00:13.0 00:14:25.010 [Child] Attached to 0000:00:12.0 00:14:25.010 [Child] Registering asynchronous event callbacks... 00:14:25.010 [Child] Getting orig temperature thresholds of all controllers 00:14:25.010 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:25.010 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:25.010 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:25.010 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:25.010 [Child] Waiting for all controllers to trigger AER and reset threshold 00:14:25.010 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:25.010 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:25.010 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:25.010 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:25.010 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:25.010 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:25.010 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:25.010 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:25.010 [Child] Cleaning up... 00:14:25.010 Asynchronous Event Request test 00:14:25.010 Attached to 0000:00:10.0 00:14:25.010 Attached to 0000:00:11.0 00:14:25.010 Attached to 0000:00:13.0 00:14:25.010 Attached to 0000:00:12.0 00:14:25.010 Reset controller to setup AER completions for this process 00:14:25.010 Registering asynchronous event callbacks... 
00:14:25.010 Getting orig temperature thresholds of all controllers 00:14:25.010 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:25.010 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:25.010 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:25.010 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:25.011 Setting all controllers temperature threshold low to trigger AER 00:14:25.011 Waiting for all controllers temperature threshold to be set lower 00:14:25.011 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:25.011 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:14:25.011 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:25.011 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:14:25.011 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:25.011 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:14:25.011 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:25.011 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:14:25.011 Waiting for all controllers to trigger AER and reset threshold 00:14:25.011 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:25.011 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:25.011 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:25.011 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:25.011 Cleaning up... 00:14:25.011 00:14:25.011 real 0m0.555s 00:14:25.011 user 0m0.185s 00:14:25.011 sys 0m0.274s 00:14:25.011 18:18:18 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.011 18:18:18 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:14:25.011 ************************************ 00:14:25.011 END TEST nvme_multi_aen 00:14:25.011 ************************************ 00:14:25.269 18:18:18 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:14:25.269 18:18:18 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:25.269 18:18:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.269 18:18:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:25.269 ************************************ 00:14:25.269 START TEST nvme_startup 00:14:25.269 ************************************ 00:14:25.269 18:18:18 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:14:25.527 Initializing NVMe Controllers 00:14:25.527 Attached to 0000:00:10.0 00:14:25.527 Attached to 0000:00:11.0 00:14:25.527 Attached to 0000:00:13.0 00:14:25.527 Attached to 0000:00:12.0 00:14:25.527 Initialization complete. 00:14:25.527 Time used:191568.094 (us). 
00:14:25.527 00:14:25.527 real 0m0.286s 00:14:25.527 user 0m0.093s 00:14:25.527 sys 0m0.139s 00:14:25.527 18:18:18 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.527 18:18:18 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:14:25.527 ************************************ 00:14:25.527 END TEST nvme_startup 00:14:25.527 ************************************ 00:14:25.527 18:18:18 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:14:25.527 18:18:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:25.527 18:18:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.527 18:18:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:25.527 ************************************ 00:14:25.527 START TEST nvme_multi_secondary 00:14:25.527 ************************************ 00:14:25.527 18:18:18 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:14:25.527 18:18:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65484 00:14:25.527 18:18:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:14:25.527 18:18:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65485 00:14:25.527 18:18:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:14:25.527 18:18:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:14:28.809 Initializing NVMe Controllers 00:14:28.809 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:28.809 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:28.809 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:28.809 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:28.809 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:14:28.809 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:14:28.809 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:14:28.809 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:14:28.809 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:14:28.809 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:14:28.809 Initialization complete. Launching workers. 
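The three spdk_nvme_perf launches above all pass "-i 0", which places them in one shared-memory group: the first instance comes up as the DPDK primary process and the other two attach to its hugepage resources as secondaries, while the disjoint core masks (0x1, 0x2, 0x4) keep the three reactors on separate cores. A hedged sketch of that pattern; the backgrounding and the sleep are my assumptions about sequencing, not the test's exact code:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # primary, core 0
    sleep 1   # assumption: let the primary initialize before the secondaries join
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # secondary, core 1
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # secondary, core 2
    wait "$pid0" "$pid1"   # cf. 'wait 65484' and 'wait 65485' in the trace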
00:14:28.809 ======================================================== 00:14:28.809 Latency(us) 00:14:28.809 Device Information : IOPS MiB/s Average min max 00:14:28.809 PCIE (0000:00:10.0) NSID 1 from core 1: 6037.78 23.59 2647.81 1077.64 8005.79 00:14:28.809 PCIE (0000:00:11.0) NSID 1 from core 1: 6037.78 23.59 2649.51 1083.15 7996.06 00:14:28.809 PCIE (0000:00:13.0) NSID 1 from core 1: 6037.78 23.59 2649.71 1104.71 8008.81 00:14:28.809 PCIE (0000:00:12.0) NSID 1 from core 1: 6037.78 23.59 2649.85 1107.18 7806.48 00:14:28.809 PCIE (0000:00:12.0) NSID 2 from core 1: 6037.78 23.59 2650.16 1097.59 7931.07 00:14:28.809 PCIE (0000:00:12.0) NSID 3 from core 1: 6037.78 23.59 2650.25 1103.75 7973.50 00:14:28.809 ======================================================== 00:14:28.809 Total : 36226.66 141.51 2649.55 1077.64 8008.81 00:14:28.809 00:14:29.068 Initializing NVMe Controllers 00:14:29.068 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:29.068 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:29.068 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:29.068 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:29.068 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:14:29.068 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:14:29.068 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:14:29.068 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:14:29.068 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:14:29.068 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:14:29.068 Initialization complete. Launching workers. 00:14:29.068 ======================================================== 00:14:29.068 Latency(us) 00:14:29.068 Device Information : IOPS MiB/s Average min max 00:14:29.068 PCIE (0000:00:10.0) NSID 1 from core 2: 3139.93 12.27 5094.16 1131.07 13212.27 00:14:29.068 PCIE (0000:00:11.0) NSID 1 from core 2: 3139.93 12.27 5095.45 1223.87 16824.47 00:14:29.068 PCIE (0000:00:13.0) NSID 1 from core 2: 3139.93 12.27 5094.88 1318.52 17162.69 00:14:29.068 PCIE (0000:00:12.0) NSID 1 from core 2: 3139.93 12.27 5094.75 1300.96 13975.05 00:14:29.068 PCIE (0000:00:12.0) NSID 2 from core 2: 3139.93 12.27 5095.03 1313.65 13770.69 00:14:29.068 PCIE (0000:00:12.0) NSID 3 from core 2: 3139.93 12.27 5094.89 1202.30 13525.20 00:14:29.068 ======================================================== 00:14:29.068 Total : 18839.58 73.59 5094.86 1131.07 17162.69 00:14:29.068 00:14:29.068 18:18:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65484 00:14:30.976 Initializing NVMe Controllers 00:14:30.976 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:30.976 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:30.976 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:30.976 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:30.976 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:30.976 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:30.976 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:30.976 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:30.976 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:30.976 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:30.976 Initialization complete. Launching workers. 
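In the two result tables above, each Total row is the column-wise aggregate of the six namespace rows: IOPS and MiB/s are summed, and min/max give the envelope. Up to the rounding of the printed values this can be re-derived from a saved copy of the console log; a hypothetical check (the file name is illustrative, fields are counted from the end of the line so the leading timestamp column does not shift them, and the pattern should be tightened if the log contains more than one run per core):

    awk '/NSID . from core 1:/ { iops += $(NF-4); mib += $(NF-3) }
         END { printf "%.2f IOPS, %.2f MiB/s\n", iops, mib }' console.log
    # -> 36226.68 IOPS, 141.54 MiB/s, vs the printed Total of 36226.66 / 141.51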
00:14:30.976 ======================================================== 00:14:30.976 Latency(us) 00:14:30.976 Device Information : IOPS MiB/s Average min max 00:14:30.976 PCIE (0000:00:10.0) NSID 1 from core 0: 9425.84 36.82 1695.90 841.72 5883.11 00:14:30.976 PCIE (0000:00:11.0) NSID 1 from core 0: 9425.64 36.82 1697.02 860.03 6084.25 00:14:30.976 PCIE (0000:00:13.0) NSID 1 from core 0: 9425.84 36.82 1696.94 835.55 6055.59 00:14:30.976 PCIE (0000:00:12.0) NSID 1 from core 0: 9425.84 36.82 1696.91 828.44 5787.09 00:14:30.976 PCIE (0000:00:12.0) NSID 2 from core 0: 9425.84 36.82 1696.87 784.23 5823.85 00:14:30.976 PCIE (0000:00:12.0) NSID 3 from core 0: 9425.84 36.82 1696.85 754.08 5718.10 00:14:30.976 ======================================================== 00:14:30.976 Total : 56554.84 220.92 1696.75 754.08 6084.25 00:14:30.976 00:14:30.976 18:18:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65485 00:14:30.976 18:18:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65554 00:14:30.976 18:18:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:14:30.976 18:18:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65555 00:14:30.976 18:18:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:14:30.976 18:18:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:14:34.269 Initializing NVMe Controllers 00:14:34.269 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:34.269 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:34.269 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:34.269 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:34.269 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:34.269 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:34.269 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:34.269 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:34.269 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:34.269 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:34.269 Initialization complete. Launching workers. 
00:14:34.269 ======================================================== 00:14:34.269 Latency(us) 00:14:34.269 Device Information : IOPS MiB/s Average min max 00:14:34.269 PCIE (0000:00:10.0) NSID 1 from core 0: 6046.72 23.62 2643.96 838.29 7171.26 00:14:34.269 PCIE (0000:00:11.0) NSID 1 from core 0: 6047.06 23.62 2645.84 869.87 7170.36 00:14:34.269 PCIE (0000:00:13.0) NSID 1 from core 0: 6047.06 23.62 2646.04 878.38 7093.59 00:14:34.269 PCIE (0000:00:12.0) NSID 1 from core 0: 6047.06 23.62 2646.34 863.28 7375.04 00:14:34.269 PCIE (0000:00:12.0) NSID 2 from core 0: 6047.06 23.62 2646.42 859.18 7355.83 00:14:34.269 PCIE (0000:00:12.0) NSID 3 from core 0: 6051.72 23.64 2644.45 858.53 7535.25 00:14:34.269 ======================================================== 00:14:34.269 Total : 36286.67 141.74 2645.51 838.29 7535.25 00:14:34.269 00:14:34.269 Initializing NVMe Controllers 00:14:34.269 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:34.269 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:34.269 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:34.269 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:34.269 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:14:34.269 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:14:34.269 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:14:34.269 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:14:34.269 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:14:34.269 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:14:34.269 Initialization complete. Launching workers. 00:14:34.269 ======================================================== 00:14:34.269 Latency(us) 00:14:34.269 Device Information : IOPS MiB/s Average min max 00:14:34.269 PCIE (0000:00:10.0) NSID 1 from core 1: 5843.07 22.82 2735.93 857.76 9703.33 00:14:34.269 PCIE (0000:00:11.0) NSID 1 from core 1: 5843.07 22.82 2737.52 871.39 8609.33 00:14:34.269 PCIE (0000:00:13.0) NSID 1 from core 1: 5843.07 22.82 2737.44 892.56 8993.54 00:14:34.269 PCIE (0000:00:12.0) NSID 1 from core 1: 5843.07 22.82 2737.35 897.67 9472.18 00:14:34.269 PCIE (0000:00:12.0) NSID 2 from core 1: 5843.07 22.82 2737.28 887.58 9630.37 00:14:34.269 PCIE (0000:00:12.0) NSID 3 from core 1: 5843.07 22.82 2737.20 801.73 9779.62 00:14:34.269 ======================================================== 00:14:34.269 Total : 35058.40 136.95 2737.12 801.73 9779.62 00:14:34.269 00:14:36.841 Initializing NVMe Controllers 00:14:36.841 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:36.841 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:36.841 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:36.841 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:36.841 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:14:36.841 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:14:36.841 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:14:36.841 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:14:36.841 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:14:36.841 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:14:36.841 Initialization complete. Launching workers. 
00:14:36.841 ======================================================== 00:14:36.841 Latency(us) 00:14:36.841 Device Information : IOPS MiB/s Average min max 00:14:36.841 PCIE (0000:00:10.0) NSID 1 from core 2: 3350.41 13.09 4772.47 909.55 13705.83 00:14:36.841 PCIE (0000:00:11.0) NSID 1 from core 2: 3350.41 13.09 4774.70 947.43 14136.43 00:14:36.841 PCIE (0000:00:13.0) NSID 1 from core 2: 3350.41 13.09 4775.08 946.07 16849.15 00:14:36.841 PCIE (0000:00:12.0) NSID 1 from core 2: 3350.41 13.09 4774.98 940.20 14078.42 00:14:36.841 PCIE (0000:00:12.0) NSID 2 from core 2: 3350.41 13.09 4774.88 942.47 14025.20 00:14:36.841 PCIE (0000:00:12.0) NSID 3 from core 2: 3350.41 13.09 4774.77 932.99 14289.82 00:14:36.841 ======================================================== 00:14:36.841 Total : 20102.43 78.53 4774.48 909.55 16849.15 00:14:36.841 00:14:36.841 18:18:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65554 00:14:36.841 18:18:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65555 00:14:36.841 00:14:36.841 real 0m11.075s 00:14:36.841 user 0m18.546s 00:14:36.841 sys 0m0.934s 00:14:36.841 18:18:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.841 18:18:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:14:36.841 ************************************ 00:14:36.841 END TEST nvme_multi_secondary 00:14:36.841 ************************************ 00:14:36.841 18:18:29 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:14:36.841 18:18:29 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:14:36.841 18:18:29 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64492 ]] 00:14:36.841 18:18:29 nvme -- common/autotest_common.sh@1094 -- # kill 64492 00:14:36.841 18:18:29 nvme -- common/autotest_common.sh@1095 -- # wait 64492 00:14:36.841 [2024-11-26 18:18:29.840356] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 [2024-11-26 18:18:29.840491] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 [2024-11-26 18:18:29.840663] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 [2024-11-26 18:18:29.840748] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 [2024-11-26 18:18:29.848411] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 [2024-11-26 18:18:29.848544] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 [2024-11-26 18:18:29.848645] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 [2024-11-26 18:18:29.848727] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 [2024-11-26 18:18:29.854173] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 
00:14:36.841 [2024-11-26 18:18:29.854252] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 [2024-11-26 18:18:29.854297] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 [2024-11-26 18:18:29.854347] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 [2024-11-26 18:18:29.859492] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 [2024-11-26 18:18:29.859574] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 [2024-11-26 18:18:29.859640] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 [2024-11-26 18:18:29.859694] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65427) is not found. Dropping the request. 00:14:36.841 18:18:30 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:14:36.841 18:18:30 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:14:36.841 18:18:30 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:36.841 18:18:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:36.841 18:18:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.841 18:18:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:36.841 ************************************ 00:14:36.841 START TEST bdev_nvme_reset_stuck_adm_cmd 00:14:36.841 ************************************ 00:14:36.841 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:36.841 * Looking for test storage... 
00:14:36.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:36.841 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:36.841 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:36.841 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:37.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.101 --rc genhtml_branch_coverage=1 00:14:37.101 --rc genhtml_function_coverage=1 00:14:37.101 --rc genhtml_legend=1 00:14:37.101 --rc geninfo_all_blocks=1 00:14:37.101 --rc geninfo_unexecuted_blocks=1 00:14:37.101 00:14:37.101 ' 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:37.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.101 --rc genhtml_branch_coverage=1 00:14:37.101 --rc genhtml_function_coverage=1 00:14:37.101 --rc genhtml_legend=1 00:14:37.101 --rc geninfo_all_blocks=1 00:14:37.101 --rc geninfo_unexecuted_blocks=1 00:14:37.101 00:14:37.101 ' 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:37.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.101 --rc genhtml_branch_coverage=1 00:14:37.101 --rc genhtml_function_coverage=1 00:14:37.101 --rc genhtml_legend=1 00:14:37.101 --rc geninfo_all_blocks=1 00:14:37.101 --rc geninfo_unexecuted_blocks=1 00:14:37.101 00:14:37.101 ' 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:37.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.101 --rc genhtml_branch_coverage=1 00:14:37.101 --rc genhtml_function_coverage=1 00:14:37.101 --rc genhtml_legend=1 00:14:37.101 --rc geninfo_all_blocks=1 00:14:37.101 --rc geninfo_unexecuted_blocks=1 00:14:37.101 00:14:37.101 ' 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:14:37.101 
18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:14:37.101 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65717 00:14:37.102 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:37.102 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65717 00:14:37.102 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:14:37.102 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65717 ']' 00:14:37.102 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.102 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:37.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.102 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:37.102 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:37.102 18:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:37.102 [2024-11-26 18:18:30.431313] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:14:37.102 [2024-11-26 18:18:30.431473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65717 ] 00:14:37.361 [2024-11-26 18:18:30.632384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:37.621 [2024-11-26 18:18:30.753991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.621 [2024-11-26 18:18:30.754180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.621 [2024-11-26 18:18:30.754497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.621 [2024-11-26 18:18:30.754564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.558 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.558 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:14:38.558 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:14:38.558 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.558 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:38.558 nvme0n1 00:14:38.558 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.558 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:14:38.558 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_qk8HR.txt 00:14:38.559 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:14:38.559 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.559 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:38.559 true 00:14:38.559 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.559 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:14:38.559 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732645111 00:14:38.559 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65745 00:14:38.559 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:38.559 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:14:38.559 18:18:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:14:40.467 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:14:40.467 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.467 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:40.727 [2024-11-26 18:18:33.807222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:14:40.727 [2024-11-26 18:18:33.807555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:40.727 [2024-11-26 18:18:33.807585] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:40.727 [2024-11-26 18:18:33.807599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.727 [2024-11-26 18:18:33.809188] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65745 00:14:40.727 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65745 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65745 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_qk8HR.txt 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_qk8HR.txt 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65717 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65717 ']' 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65717 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65717 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65717' 00:14:40.727 killing process with pid 65717 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65717 00:14:40.727 18:18:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65717 00:14:43.264 18:18:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:14:43.264 18:18:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:14:43.264 ************************************ 00:14:43.264 END TEST bdev_nvme_reset_stuck_adm_cmd 00:14:43.264 ************************************ 00:14:43.264 00:14:43.264 real 0m6.501s 
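The jq '.cpl' / base64_decode_bits sequence traced above pulls the NVMe status back out of the 16-byte completion that bdev_nvme_send_cmd wrote to the temp file. A hedged stand-alone recreation; the function name is mine, and the byte arithmetic is my reading of the trace (bytes 14-15 of the completion hold the status word: bit 0 phase tag, bits 8:1 status code, bits 11:9 status code type):

    decode_status_bits() {   # usage: decode_status_bits <base64 cpl> <shift> <mask>
      local bytes status
      bytes=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
      status=$(( bytes[14] | bytes[15] << 8 ))
      printf '0x%x\n' $(( (status >> $2) & $3 ))
    }
    decode_status_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # -> 0x1, the injected SC
    decode_status_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # -> 0x0, the injected SCT

This matches the --sct 0 --sc 1 error injection configured earlier in the trace, and it is exactly what the test's (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) assertion on the next lines verifies.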
00:14:43.264 user 0m22.952s 00:14:43.264 sys 0m0.726s 00:14:43.264 18:18:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.264 18:18:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:43.264 18:18:36 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:14:43.264 18:18:36 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:14:43.264 18:18:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:43.264 18:18:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.264 18:18:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:43.264 ************************************ 00:14:43.264 START TEST nvme_fio 00:14:43.264 ************************************ 00:14:43.264 18:18:36 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:14:43.264 18:18:36 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:43.264 18:18:36 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:14:43.264 18:18:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:14:43.264 18:18:36 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:43.523 18:18:36 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:14:43.523 18:18:36 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:43.523 18:18:36 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:43.523 18:18:36 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:43.523 18:18:36 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:43.523 18:18:36 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:43.523 18:18:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:14:43.523 18:18:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:14:43.523 18:18:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:43.523 18:18:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:43.523 18:18:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:43.783 18:18:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:43.783 18:18:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:44.042 18:18:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:44.042 18:18:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:44.042 18:18:37 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:44.042 18:18:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:14:44.301 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:44.301 fio-3.35 00:14:44.301 Starting 1 thread 00:14:50.865 00:14:50.865 test: (groupid=0, jobs=1): err= 0: pid=65899: Tue Nov 26 18:18:42 2024 00:14:50.865 read: IOPS=22.3k, BW=86.9MiB/s (91.2MB/s)(174MiB/2001msec) 00:14:50.865 slat (nsec): min=4366, max=74151, avg=5253.31, stdev=1205.18 00:14:50.865 clat (usec): min=231, max=11256, avg=2865.31, stdev=399.37 00:14:50.865 lat (usec): min=236, max=11331, avg=2870.57, stdev=400.07 00:14:50.865 clat percentiles (usec): 00:14:50.865 | 1.00th=[ 2343], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2769], 00:14:50.865 | 30.00th=[ 2802], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:14:50.865 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2933], 95.00th=[ 2999], 00:14:50.865 | 99.00th=[ 4490], 99.50th=[ 5800], 99.90th=[ 7635], 99.95th=[ 8160], 00:14:50.865 | 99.99th=[10945] 00:14:50.865 bw ( KiB/s): min=86944, max=90384, per=99.07%, avg=88197.33, stdev=1900.46, samples=3 00:14:50.865 iops : min=21736, max=22596, avg=22049.33, stdev=475.11, samples=3 00:14:50.865 write: IOPS=22.1k, BW=86.3MiB/s (90.5MB/s)(173MiB/2001msec); 0 zone resets 00:14:50.865 slat (nsec): min=4460, max=74495, avg=5520.55, stdev=1364.59 00:14:50.865 clat (usec): min=208, max=11049, avg=2874.20, stdev=413.07 00:14:50.865 lat (usec): min=214, max=11066, avg=2879.72, stdev=413.80 00:14:50.865 clat percentiles (usec): 00:14:50.865 | 1.00th=[ 2376], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2769], 00:14:50.865 | 30.00th=[ 2802], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:14:50.865 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3032], 00:14:50.865 | 99.00th=[ 4555], 99.50th=[ 5932], 99.90th=[ 7767], 99.95th=[ 8586], 00:14:50.865 | 99.99th=[10552] 00:14:50.865 bw ( KiB/s): min=86664, max=90200, per=99.94%, avg=88362.67, stdev=1772.07, samples=3 00:14:50.865 iops : min=21666, max=22550, avg=22090.67, stdev=443.02, samples=3 00:14:50.865 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:14:50.865 lat (msec) : 2=0.60%, 4=97.93%, 10=1.41%, 20=0.02% 00:14:50.865 cpu : usr=99.35%, sys=0.00%, 
ctx=4, majf=0, minf=607 00:14:50.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:50.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:50.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:50.865 issued rwts: total=44535,44230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:50.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:50.865 00:14:50.865 Run status group 0 (all jobs): 00:14:50.865 READ: bw=86.9MiB/s (91.2MB/s), 86.9MiB/s-86.9MiB/s (91.2MB/s-91.2MB/s), io=174MiB (182MB), run=2001-2001msec 00:14:50.865 WRITE: bw=86.3MiB/s (90.5MB/s), 86.3MiB/s-86.3MiB/s (90.5MB/s-90.5MB/s), io=173MiB (181MB), run=2001-2001msec 00:14:50.865 ----------------------------------------------------- 00:14:50.865 Suppressions used: 00:14:50.865 count bytes template 00:14:50.865 1 32 /usr/src/fio/parse.c 00:14:50.865 1 8 libtcmalloc_minimal.so 00:14:50.865 ----------------------------------------------------- 00:14:50.865 00:14:50.865 18:18:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:14:50.865 18:18:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:50.865 18:18:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:50.865 18:18:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:50.865 18:18:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:50.865 18:18:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:50.866 18:18:43 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:50.866 18:18:43 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:14:50.866 18:18:43 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:50.866 18:18:43 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:14:50.866 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:50.866 fio-3.35 00:14:50.866 Starting 1 thread 00:14:57.432 00:14:57.432 test: (groupid=0, jobs=1): err= 0: pid=65987: Tue Nov 26 18:18:49 2024 00:14:57.432 read: IOPS=22.8k, BW=89.0MiB/s (93.3MB/s)(178MiB/2001msec) 00:14:57.432 slat (nsec): min=4347, max=61918, avg=5200.57, stdev=1029.03 00:14:57.432 clat (usec): min=219, max=10920, avg=2799.58, stdev=252.48 00:14:57.433 lat (usec): min=224, max=10982, avg=2804.79, stdev=252.93 00:14:57.433 clat percentiles (usec): 00:14:57.433 | 1.00th=[ 2606], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:14:57.433 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2769], 60.00th=[ 2802], 00:14:57.433 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2900], 95.00th=[ 2933], 00:14:57.433 | 99.00th=[ 3195], 99.50th=[ 4146], 99.90th=[ 6128], 99.95th=[ 7898], 00:14:57.433 | 99.99th=[10552] 00:14:57.433 bw ( KiB/s): min=88544, max=92272, per=99.31%, avg=90466.67, stdev=1866.77, samples=3 00:14:57.433 iops : min=22136, max=23068, avg=22616.67, stdev=466.69, samples=3 00:14:57.433 write: IOPS=22.6k, BW=88.4MiB/s (92.7MB/s)(177MiB/2001msec); 0 zone resets 00:14:57.433 slat (nsec): min=4566, max=59977, avg=5445.76, stdev=1060.36 00:14:57.433 clat (usec): min=260, max=10659, avg=2806.38, stdev=259.53 00:14:57.433 lat (usec): min=265, max=10676, avg=2811.83, stdev=259.98 00:14:57.433 clat percentiles (usec): 00:14:57.433 | 1.00th=[ 2606], 5.00th=[ 2671], 10.00th=[ 2671], 20.00th=[ 2704], 00:14:57.433 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:14:57.433 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2900], 95.00th=[ 2933], 00:14:57.433 | 99.00th=[ 3228], 99.50th=[ 4424], 99.90th=[ 6194], 99.95th=[ 8225], 00:14:57.433 | 99.99th=[10290] 00:14:57.433 bw ( KiB/s): min=87944, max=92256, per=100.00%, avg=90714.67, stdev=2404.53, samples=3 00:14:57.433 iops : min=21986, max=23064, avg=22678.67, stdev=601.13, samples=3 00:14:57.433 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:14:57.433 lat (msec) : 2=0.05%, 4=99.35%, 10=0.54%, 20=0.02% 00:14:57.433 cpu : usr=99.40%, sys=0.00%, ctx=18, majf=0, minf=606 00:14:57.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:57.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:57.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:57.433 issued rwts: total=45572,45309,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:57.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:57.433 00:14:57.433 Run status group 0 (all jobs): 00:14:57.433 READ: bw=89.0MiB/s (93.3MB/s), 89.0MiB/s-89.0MiB/s (93.3MB/s-93.3MB/s), io=178MiB (187MB), run=2001-2001msec 00:14:57.433 WRITE: bw=88.4MiB/s (92.7MB/s), 88.4MiB/s-88.4MiB/s (92.7MB/s-92.7MB/s), io=177MiB (186MB), run=2001-2001msec 00:14:57.433 ----------------------------------------------------- 00:14:57.433 Suppressions used: 00:14:57.433 count bytes template 00:14:57.433 1 32 /usr/src/fio/parse.c 00:14:57.433 1 8 libtcmalloc_minimal.so 00:14:57.433 ----------------------------------------------------- 00:14:57.433 
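At the top of TEST nvme_fio, get_nvme_bdfs collects the PCI address of every NVMe controller by rendering the generated bdev config and extracting the traddr fields, as the @1498-@1504 trace shows. A minimal sketch of that helper, assuming $rootdir points at the SPDK checkout:

# get_nvme_bdfs -- sketch of the helper traced at the start of nvme_fio:
# gen_nvme.sh emits a bdev_nvme_attach_controller JSON config, and jq pulls
# out one PCI address (traddr) per controller
get_nvme_bdfs() {
    local bdfs=()
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} != 0 )) || return 1   # fail if no controllers were found
    printf '%s\n' "${bdfs[@]}"
}
# in this run it yields the four QEMU controllers:
# 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0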
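Each per-controller fio run then goes through fio_plugin, which, as the repeated @1341-@1356 trace shows, looks up the sanitizer runtime the SPDK ioengine links against and preloads it ahead of the plugin, so ASan initializes before fio dlopen()s the engine. A condensed sketch:

# fio_plugin <plugin.so> <fio args...> -- condensed from the xtrace above
fio_plugin() {
    local plugin=$1; shift
    local sanitizers=('libasan' 'libclang_rt.asan') sanitizer asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # find the sanitizer runtime the plugin was linked against
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # preload the sanitizer first, then the ioengine itself
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
}
# invoked here as:
# fio_plugin .../build/fio/spdk_nvme .../app/fio/nvme/example_config.fio \
#     '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096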
00:14:57.433 18:18:50 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:14:57.433 18:18:50 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:14:57.433 18:18:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:14:57.433 18:18:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:57.433 18:18:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:57.433 18:18:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:14:57.433 18:18:50 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:14:57.433 18:18:50 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:57.433 18:18:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:14:57.691 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:57.691 fio-3.35 00:14:57.691 Starting 1 thread 00:15:04.252 00:15:04.252 test: (groupid=0, jobs=1): err= 0: pid=66081: Tue Nov 26 18:18:57 2024 00:15:04.252 read: IOPS=22.7k, BW=88.8MiB/s (93.2MB/s)(178MiB/2001msec) 00:15:04.252 slat (nsec): min=4448, max=63939, avg=5207.59, stdev=1095.94 00:15:04.252 clat (usec): min=232, max=11694, avg=2803.35, stdev=317.73 00:15:04.252 lat (usec): min=237, max=11758, avg=2808.56, stdev=318.22 00:15:04.252 clat percentiles (usec): 00:15:04.252 | 1.00th=[ 2540], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2704], 00:15:04.252 | 
30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:15:04.252 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2900], 95.00th=[ 3097], 00:15:04.253 | 99.00th=[ 3523], 99.50th=[ 4293], 99.90th=[ 6718], 99.95th=[ 8455], 00:15:04.253 | 99.99th=[11338] 00:15:04.253 bw ( KiB/s): min=87792, max=93216, per=98.85%, avg=89928.00, stdev=2889.68, samples=3 00:15:04.253 iops : min=21948, max=23304, avg=22482.00, stdev=722.42, samples=3 00:15:04.253 write: IOPS=22.6k, BW=88.3MiB/s (92.6MB/s)(177MiB/2001msec); 0 zone resets 00:15:04.253 slat (nsec): min=4548, max=59645, avg=5426.74, stdev=1091.82 00:15:04.253 clat (usec): min=254, max=11489, avg=2810.16, stdev=330.70 00:15:04.253 lat (usec): min=260, max=11506, avg=2815.59, stdev=331.18 00:15:04.253 clat percentiles (usec): 00:15:04.253 | 1.00th=[ 2540], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:15:04.253 | 30.00th=[ 2737], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:15:04.253 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2900], 95.00th=[ 3097], 00:15:04.253 | 99.00th=[ 3523], 99.50th=[ 4359], 99.90th=[ 7373], 99.95th=[ 8848], 00:15:04.253 | 99.99th=[10945] 00:15:04.253 bw ( KiB/s): min=87360, max=92680, per=99.68%, avg=90146.67, stdev=2669.03, samples=3 00:15:04.253 iops : min=21840, max=23170, avg=22536.67, stdev=667.26, samples=3 00:15:04.253 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:15:04.253 lat (msec) : 2=0.09%, 4=99.19%, 10=0.65%, 20=0.03% 00:15:04.253 cpu : usr=99.25%, sys=0.15%, ctx=4, majf=0, minf=606 00:15:04.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:04.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:04.253 issued rwts: total=45508,45241,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:04.253 00:15:04.253 Run status group 0 (all jobs): 00:15:04.253 READ: bw=88.8MiB/s (93.2MB/s), 88.8MiB/s-88.8MiB/s (93.2MB/s-93.2MB/s), io=178MiB (186MB), run=2001-2001msec 00:15:04.253 WRITE: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=177MiB (185MB), run=2001-2001msec 00:15:04.526 ----------------------------------------------------- 00:15:04.526 Suppressions used: 00:15:04.526 count bytes template 00:15:04.526 1 32 /usr/src/fio/parse.c 00:15:04.526 1 8 libtcmalloc_minimal.so 00:15:04.526 ----------------------------------------------------- 00:15:04.526 00:15:04.526 18:18:57 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:04.526 18:18:57 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:04.526 18:18:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:04.526 18:18:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:04.783 18:18:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:04.783 18:18:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:05.091 18:18:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:05.091 18:18:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:05.091 18:18:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:05.349 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:05.350 fio-3.35 00:15:05.350 Starting 1 thread 00:15:15.321 00:15:15.321 test: (groupid=0, jobs=1): err= 0: pid=66181: Tue Nov 26 18:19:08 2024 00:15:15.321 read: IOPS=22.3k, BW=87.2MiB/s (91.4MB/s)(175MiB/2001msec) 00:15:15.321 slat (nsec): min=4328, max=59174, avg=5347.01, stdev=1380.19 00:15:15.321 clat (usec): min=280, max=11995, avg=2857.68, stdev=497.07 00:15:15.321 lat (usec): min=286, max=12054, avg=2863.03, stdev=497.97 00:15:15.321 clat percentiles (usec): 00:15:15.321 | 1.00th=[ 2573], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:15:15.321 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:15:15.321 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 3032], 00:15:15.321 | 99.00th=[ 5866], 99.50th=[ 6587], 99.90th=[ 7767], 99.95th=[ 9110], 00:15:15.321 | 99.99th=[11731] 00:15:15.321 bw ( KiB/s): min=86632, max=92640, per=99.86%, avg=89173.33, stdev=3109.05, samples=3 00:15:15.321 iops : min=21658, max=23160, avg=22293.33, stdev=777.26, samples=3 00:15:15.321 write: IOPS=22.2k, BW=86.6MiB/s (90.8MB/s)(173MiB/2001msec); 0 zone resets 00:15:15.321 slat (nsec): min=4458, max=75219, avg=5551.12, stdev=1455.94 00:15:15.321 clat (usec): min=211, max=11839, avg=2863.91, stdev=502.56 00:15:15.321 lat (usec): min=216, max=11856, avg=2869.46, stdev=503.45 00:15:15.321 clat percentiles (usec): 00:15:15.321 | 1.00th=[ 2573], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737], 00:15:15.321 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:15:15.321 | 70.00th=[ 2835], 80.00th=[ 
2868], 90.00th=[ 2933], 95.00th=[ 3064], 00:15:15.321 | 99.00th=[ 5800], 99.50th=[ 6587], 99.90th=[ 7898], 99.95th=[ 9372], 00:15:15.321 | 99.99th=[11469] 00:15:15.321 bw ( KiB/s): min=86272, max=92312, per=100.00%, avg=89360.00, stdev=3022.30, samples=3 00:15:15.321 iops : min=21568, max=23078, avg=22340.00, stdev=755.57, samples=3 00:15:15.321 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:15:15.321 lat (msec) : 2=0.12%, 4=97.68%, 10=2.12%, 20=0.03% 00:15:15.321 cpu : usr=99.30%, sys=0.05%, ctx=4, majf=0, minf=604 00:15:15.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:15.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.321 issued rwts: total=44672,44377,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.321 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.321 00:15:15.321 Run status group 0 (all jobs): 00:15:15.321 READ: bw=87.2MiB/s (91.4MB/s), 87.2MiB/s-87.2MiB/s (91.4MB/s-91.4MB/s), io=175MiB (183MB), run=2001-2001msec 00:15:15.321 WRITE: bw=86.6MiB/s (90.8MB/s), 86.6MiB/s-86.6MiB/s (90.8MB/s-90.8MB/s), io=173MiB (182MB), run=2001-2001msec 00:15:15.321 ----------------------------------------------------- 00:15:15.321 Suppressions used: 00:15:15.321 count bytes template 00:15:15.321 1 32 /usr/src/fio/parse.c 00:15:15.321 1 8 libtcmalloc_minimal.so 00:15:15.321 ----------------------------------------------------- 00:15:15.321 00:15:15.321 18:19:08 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:15.321 18:19:08 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:15:15.321 00:15:15.321 real 0m32.015s 00:15:15.321 user 0m17.190s 00:15:15.321 sys 0m28.125s 00:15:15.321 18:19:08 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.322 18:19:08 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:15:15.322 ************************************ 00:15:15.322 END TEST nvme_fio 00:15:15.322 ************************************ 00:15:15.581 ************************************ 00:15:15.581 END TEST nvme 00:15:15.581 ************************************ 00:15:15.581 00:15:15.581 real 1m46.491s 00:15:15.581 user 3m49.428s 00:15:15.581 sys 0m41.830s 00:15:15.581 18:19:08 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.581 18:19:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.581 18:19:08 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:15:15.581 18:19:08 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:15.581 18:19:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:15.581 18:19:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.581 18:19:08 -- common/autotest_common.sh@10 -- # set +x 00:15:15.581 ************************************ 00:15:15.581 START TEST nvme_scc 00:15:15.581 ************************************ 00:15:15.581 18:19:08 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:15.581 * Looking for test storage... 
00:15:15.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:15.581 18:19:08 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:15.581 18:19:08 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:15.581 18:19:08 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:15.840 18:19:08 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@345 -- # : 1 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@368 -- # return 0 00:15:15.840 18:19:08 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.840 18:19:08 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:15.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.840 --rc genhtml_branch_coverage=1 00:15:15.840 --rc genhtml_function_coverage=1 00:15:15.840 --rc genhtml_legend=1 00:15:15.840 --rc geninfo_all_blocks=1 00:15:15.840 --rc geninfo_unexecuted_blocks=1 00:15:15.840 00:15:15.840 ' 00:15:15.840 18:19:08 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:15.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.840 --rc genhtml_branch_coverage=1 00:15:15.840 --rc genhtml_function_coverage=1 00:15:15.840 --rc genhtml_legend=1 00:15:15.840 --rc geninfo_all_blocks=1 00:15:15.840 --rc geninfo_unexecuted_blocks=1 00:15:15.840 00:15:15.840 ' 00:15:15.840 18:19:08 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:15:15.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.840 --rc genhtml_branch_coverage=1 00:15:15.840 --rc genhtml_function_coverage=1 00:15:15.840 --rc genhtml_legend=1 00:15:15.840 --rc geninfo_all_blocks=1 00:15:15.840 --rc geninfo_unexecuted_blocks=1 00:15:15.840 00:15:15.840 ' 00:15:15.840 18:19:08 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:15.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.840 --rc genhtml_branch_coverage=1 00:15:15.840 --rc genhtml_function_coverage=1 00:15:15.840 --rc genhtml_legend=1 00:15:15.840 --rc geninfo_all_blocks=1 00:15:15.840 --rc geninfo_unexecuted_blocks=1 00:15:15.840 00:15:15.840 ' 00:15:15.840 18:19:08 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:15.840 18:19:08 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:15.840 18:19:08 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:15:15.840 18:19:08 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:15.840 18:19:08 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.840 18:19:08 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.840 18:19:08 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.840 18:19:08 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.840 18:19:08 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.840 18:19:08 nvme_scc -- paths/export.sh@5 -- # export PATH 00:15:15.840 18:19:08 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
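The lt 1.15 2 check traced above is a thin wrapper over cmp_versions in scripts/common.sh: both versions are split on IFS=.-: and compared component by component. A trimmed sketch (the real helper sanitizes each component through decimal() and supports more operators):

# cmp_versions <ver1> <op> <ver2> -- trimmed from the scripts/common.sh trace
cmp_versions() {
    local ver1 ver2 v op=$2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # first differing component decides the comparison
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]   # all components equal: only '==' succeeds
}
# lt() { cmp_versions "$1" '<' "$2"; }  ->  lt 1.15 2 succeeds, as seen here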
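The long trace that follows is scan_nvme_ctrls populating the ctrls/nvmes associative arrays: for each /sys/class/nvme/nvme* controller it records the PCI address, then nvme_get reads every "reg : val" line that nvme id-ctrl prints and evals it into an nvme0=() style array. The parsing loop, condensed from the trace below (details of the real helper in test/common/nvme/functions.sh may differ):

# nvme_get <ref> id-ctrl <dev> -- condensed from the functions.sh trace below:
# turn each "reg : val" line of id-ctrl output into ref[reg]=val
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # keys arrive padded with spaces
        val=${val# }                  # values keep their trailing padding
        [[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\$val"
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}
# nvme_get nvme0 id-ctrl /dev/nvme0 leaves e.g.
# nvme0[vid]=0x1b36, nvme0[sn]='12341   ', nvme0[mdts]=7, ...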
00:15:15.840 18:19:08 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:15:15.840 18:19:08 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:15.840 18:19:08 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:15:15.840 18:19:08 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:15.840 18:19:08 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:15:15.840 18:19:08 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:15.840 18:19:08 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:15.840 18:19:08 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:15.840 18:19:08 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:15:15.840 18:19:08 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:15.840 18:19:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:15:15.840 18:19:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:15:15.840 18:19:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:15:15.840 18:19:08 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:16.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:16.419 Waiting for block devices as requested 00:15:16.693 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:16.693 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:16.693 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:16.952 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:22.239 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:22.239 18:19:15 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:15:22.239 18:19:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:22.239 18:19:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:22.239 18:19:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:22.239 18:19:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.239 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.240 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:15:22.241 18:19:15 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:15:22.241 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:15:22.242 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:22.243 18:19:15 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.243 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:15:22.244 
18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.244 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:15:22.245 18:19:15 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.245 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:15:22.246 18:19:15 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:15:22.246 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:15:22.247 18:19:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.247 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:15:22.248 18:19:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:22.248 18:19:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:22.248 18:19:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:22.248 18:19:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:15:22.248 18:19:15 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.248 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.249 
00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 nvme1[mdts]=7 nvme1[cntlid]=0 nvme1[ver]=0x10400 nvme1[rtd3r]=0 nvme1[rtd3e]=0 nvme1[oaes]=0x100 nvme1[ctratt]=0x8000 nvme1[rrls]=0 nvme1[cntrltype]=1
00:15:22.249 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 nvme1[crdt1]=0 nvme1[crdt2]=0 nvme1[crdt3]=0 nvme1[nvmsr]=0 nvme1[vwci]=0 nvme1[mec]=0 nvme1[oacs]=0x12a nvme1[acl]=3
00:15:22.250 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 nvme1[frmw]=0x3 nvme1[lpa]=0x7 nvme1[elpe]=0 nvme1[npss]=0 nvme1[avscc]=0 nvme1[apsta]=0 nvme1[wctemp]=343 nvme1[cctemp]=373
00:15:22.250 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 nvme1[hmpre]=0 nvme1[hmmin]=0 nvme1[tnvmcap]=0 nvme1[unvmcap]=0 nvme1[rpmbs]=0 nvme1[edstt]=0 nvme1[dsto]=0 nvme1[fwug]=0 nvme1[kas]=0
00:15:22.251 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 nvme1[mntmt]=0 nvme1[mxtmt]=0 nvme1[sanicap]=0 nvme1[hmminds]=0 nvme1[hmmaxd]=0 nvme1[nsetidmax]=0 nvme1[endgidmax]=0 nvme1[anatt]=0
00:15:22.251 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 nvme1[anagrpmax]=0 nvme1[nanagrpid]=0 nvme1[pels]=0 nvme1[domainid]=0 nvme1[megcap]=0 nvme1[sqes]=0x66 nvme1[cqes]=0x44 nvme1[maxcmd]=0 nvme1[nn]=256
00:15:22.252 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d nvme1[fuses]=0 nvme1[fna]=0 nvme1[vwc]=0x7 nvme1[awun]=0 nvme1[awupf]=0 nvme1[icsvscc]=0 nvme1[nwpc]=0 nvme1[acwu]=0
00:15:22.252 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 nvme1[sgls]=0x1 nvme1[mnan]=0 nvme1[maxdna]=0 nvme1[maxcna]=0 nvme1[subnqn]=nqn.2019-08.org.qemu:12340 nvme1[ioccsz]=0 nvme1[iorcsz]=0 nvme1[icdoff]=0
00:15:22.252 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 nvme1[msdbd]=0 nvme1[ofcs]=0
00:15:22.253 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' nvme1[active_power_workload]=-
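Each assignment above falls out of the same small parse loop: nvme-cli prints one "field : value" pair per line, and the trace shows the script splitting each line on ':' (IFS=:, read -r reg val) and storing the result in a Bash associative array keyed by field name. A minimal sketch of that pattern, assuming plain-text `nvme id-ctrl` output; this is a simplified illustration, not the actual nvme/functions.sh helper:

  #!/usr/bin/env bash
  # Sketch: fold "field : value" lines from nvme-cli into an associative array.
  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}   # field names arrive padded, e.g. "mdts      "
      val=${val# }               # drop the space that follows the ':'
      [[ -n $reg && -n $val ]] || continue
      ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme1)
  echo "mdts=${ctrl[mdts]} oacs=${ctrl[oacs]}"

The real helper additionally uses eval with the array name passed in as a reference, which is why every stored register shows up twice in the trace (once as the eval source, once as the resulting assignment).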
00:15:22.253 18:19:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:15:22.253 18:19:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:22.253 18:19:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:15:22.253 18:19:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:15:22.253 18:19:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:15:22.253 18:19:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:15:22.253 18:19:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:15:22.253 18:19:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:15:22.253 18:19:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:15:22.253 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a ng1n1[ncap]=0x17a17a ng1n1[nuse]=0x17a17a ng1n1[nsfeat]=0x14 ng1n1[nlbaf]=7 ng1n1[flbas]=0x7 ng1n1[mc]=0x3 ng1n1[dpc]=0x1f ng1n1[dps]=0 ng1n1[nmic]=0
00:15:22.254 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 ng1n1[fpi]=0 ng1n1[dlfeat]=1 ng1n1[nawun]=0 ng1n1[nawupf]=0 ng1n1[nacwu]=0 ng1n1[nabsn]=0 ng1n1[nabo]=0 ng1n1[nabspf]=0 ng1n1[noiob]=0
00:15:22.254 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 ng1n1[npwg]=0 ng1n1[npwa]=0 ng1n1[npdg]=0 ng1n1[npda]=0 ng1n1[nows]=0 ng1n1[mssrl]=128 ng1n1[mcl]=128 ng1n1[msrc]=127
00:15:22.254 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 ng1n1[anagrpid]=0 ng1n1[nsattr]=0 ng1n1[nvmsetid]=0 ng1n1[endgid]=0 ng1n1[nguid]=00000000000000000000000000000000 ng1n1[eui64]=0000000000000000
00:15:22.255 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:15:22.255 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:15:22.255 18:19:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
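The `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` line in this loop is a Bash extglob pattern that picks up both namespace flavors under the controller's sysfs node: the generic character device (ng1n1) and the block device (nvme1n1). A standalone sketch of the same matching, with the sysfs path as the only assumption:

  #!/usr/bin/env bash
  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme1
  # ${ctrl##*nvme} -> "1" and ${ctrl##*/} -> "nvme1",
  # so the pattern expands to @(ng1|nvme1n)* under $ctrl/.
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      echo "namespace node: ${ns##*/}"   # prints ng1n1, then nvme1n1
  done

That is why the same namespace is dumped twice below: once via /dev/ng1n1 and once via /dev/nvme1n1.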
00:15:22.255 18:19:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:22.255 18:19:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:15:22.255 18:19:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:15:22.255 18:19:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:15:22.255 18:19:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:15:22.255 18:19:15 nvme_scc -- nvme/functions.sh@18 -- # shift
00:15:22.255 18:19:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:15:22.255 18:19:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:15:22.255 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a nvme1n1[ncap]=0x17a17a nvme1n1[nuse]=0x17a17a nvme1n1[nsfeat]=0x14 nvme1n1[nlbaf]=7 nvme1n1[flbas]=0x7 nvme1n1[mc]=0x3 nvme1n1[dpc]=0x1f nvme1n1[dps]=0 nvme1n1[nmic]=0
00:15:22.256 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 nvme1n1[fpi]=0 nvme1n1[dlfeat]=1 nvme1n1[nawun]=0 nvme1n1[nawupf]=0 nvme1n1[nacwu]=0 nvme1n1[nabsn]=0 nvme1n1[nabo]=0 nvme1n1[nabspf]=0 nvme1n1[noiob]=0
00:15:22.256 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 nvme1n1[npwg]=0 nvme1n1[npwa]=0 nvme1n1[npdg]=0 nvme1n1[npda]=0 nvme1n1[nows]=0 nvme1n1[mssrl]=128 nvme1n1[mcl]=128 nvme1n1[msrc]=127
00:15:22.257 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 nvme1n1[anagrpid]=0 nvme1n1[nsattr]=0 nvme1n1[nvmsetid]=0 nvme1n1[endgid]=0 nvme1n1[nguid]=00000000000000000000000000000000 nvme1n1[eui64]=0000000000000000
00:15:22.257 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
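Both namespace views report flbas=0x7, i.e. LBA format 7 is in use: 'ms:64 lbads:12 rp:0 (in use)' means 4096-byte data blocks (2^12) carrying 64 bytes of metadata each. With nsze=0x17a17a logical blocks, the namespace works out to roughly 6.3 GB of data. A quick back-of-the-envelope check using only values from the dump above:

  # lbads:12 -> 4096-byte blocks; nsze = 0x17a17a logical blocks
  echo $(( 0x17a17a ))              # 1548666 blocks
  echo $(( 0x17a17a * (1 << 12) ))  # 6343335936 bytes (~6.3 GB), metadata extra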
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:15:22.258 18:19:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:22.258 18:19:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:22.258 18:19:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:22.258 18:19:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
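Between namespace dumps the trace shows the outer discovery loop (functions.sh@47-63): each /sys/class/nvme/nvmeN is filtered through pci_can_use() from scripts/common.sh and then recorded in a set of bookkeeping arrays. A sketch of that shape; the array names come from the trace, while the readlink-based BDF lookup is an assumption:

  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")  # assumption: BDF via the sysfs link
    pci_can_use "$pci" || continue                   # allow/deny-list check (scripts/common.sh)
    ctrl_dev=${ctrl##*/}                             # e.g. nvme2
    ctrls["$ctrl_dev"]=$ctrl_dev                     # as at @60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # name of this controller's namespace map (@61)
    bdfs["$ctrl_dev"]=$pci                           # e.g. 0000:00:12.0 (@62)
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # index by controller number (@63)
  done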
'nvme2[fr]="8.0.0 "' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:15:22.258 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:15:22.259 18:19:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:15:22.259 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
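With the array populated, capability bitfields such as the OACS value captured above (0x12a for nvme2) can be tested with plain arithmetic. The bit positions below follow the NVMe base specification (bit 1 = Format NVM, bit 3 = Namespace Management) and are illustrative, not exhaustive:

  oacs=0x12a                                       # value captured for nvme2 above
  (( oacs & (1 << 1) )) && echo "Format NVM supported"
  (( oacs & (1 << 3) )) && echo "Namespace Management supported"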
00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:15:22.260 18:19:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.260 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:15:22.261 
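The SQES/CQES values above (0x66 and 0x44) pack two exponents per byte, with the low nibble giving the required queue-entry size and the high nibble the maximum, both as powers of two (nibble layout per the NVMe base spec). A quick decode:

  sqes=0x66 cqes=0x44
  printf 'SQ entry: min %d B, max %d B\n' $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))
  printf 'CQ entry: min %d B, max %d B\n' $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))
  # -> 64-byte submission entries, 16-byte completion entries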
18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.261 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:22.262 
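The `for ns in "$ctrl/"@(...)` loop that starts here (functions.sh@54) relies on extglob, so a single pattern catches both the generic char-device nodes (ng2n1) and the block namespaces (nvme2n1) under the controller directory. A sketch of the same enumeration, reusing the nvme_get() sketch above:

  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme2
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # matches ng2n*, nvme2n*
    ns_dev=${ns##*/}                                            # e.g. ng2n1
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                     # e.g. /dev/ng2n1
  done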
18:19:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.262 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:15:22.263 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.264 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:15:22.265 18:19:15 nvme_scc -- 
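For ng2n1 the format table above pairs flbas=0x4 with lbaf4, the entry tagged "(in use)": ms:0 lbads:12 rp:0, i.e. 4096-byte blocks with no metadata. The decode is just the low nibble of flbas plus 2^lbads:

  flbas=0x4
  lbaf4='ms:0 lbads:12 rp:0 (in use)'
  echo "in-use format: lbaf$((flbas & 0xf))"                                            # lbaf4
  [[ $lbaf4 =~ lbads:([0-9]+) ]] && echo "block size: $((1 << BASH_REMATCH[1])) bytes"  # 4096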
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.265 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 
18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.266 18:19:15 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:15:22.266 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.267 18:19:15 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.267 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.267 18:19:15 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.268 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.554 18:19:15 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:15:22.554 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:15:22.555 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.556 18:19:15 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:15:22.556 18:19:15 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:15:22.556 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.557 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:15:22.558 18:19:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.558 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:15:22.559 
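The id-ns call just issued above feeds the parsing loop that produces the nvme2n3[...] assignments that follow. A minimal sketch of that pattern, under illustrative names (parse_id_output and fake_id_ns are not SPDK helpers): nvme-cli prints one "field : value" line per register, the loop splits each line on the first ':' via IFS, and eval stores the trimmed pair into a global associative array named after the device, exactly as the repeated "eval 'nvme2n3[...]=...'" entries show.

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern visible in the trace: split nvme-cli
    # "field : value" output and store it in a named global assoc array.
    parse_id_output() {
        local ref=$1 reg val    # $ref is the target array name, e.g. nvme2n3
        shift
        local -gA "$ref=()"     # declare/reset a global associative array
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # field names arrive padded, e.g. "nsze   "
            [[ -n $reg && -n $val ]] || continue   # skip headers/blank values
            val=${val# }                 # drop the space after ':'
            eval "${ref}[${reg}]=\"\$val\""   # indirect assignment into the array
        done < <("$@")                   # e.g. nvme id-ns /dev/nvme2n3
    }

    # Demo without hardware: feed id-ns-shaped text instead of nvme-cli.
    fake_id_ns() {
        printf '%s\n' 'nsze    : 0x100000' 'flbas   : 0x4' 'nlbaf   : 7'
    }

    parse_id_output nvme2n3 fake_id_ns
    echo "nsze=${nvme2n3[nsze]} flbas=${nvme2n3[flbas]}"

Running the demo prints "nsze=0x100000 flbas=0x4"; in the trace the inner command is /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3, and values that themselves contain colons (the lbaf descriptors) survive because only the first ':' per line is a separator.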
18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:15:22.559 18:19:15 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.559 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:22.560 18:19:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:22.560 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:15:22.561 18:19:15 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:15:22.561 18:19:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:22.561 18:19:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:22.561 18:19:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:22.561 18:19:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 
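The pci_can_use lines a few entries up show how a controller is admitted before it is parsed: its PCI address is tested against allow/block lists that are empty in this run, which is why the trace shows a bare "[[ =~ 0000:00:13.0 ]]" with nothing on the left. A rough sketch of that gating, simplified to substring matching and assuming the controller's transport address is readable from the sysfs "address" attribute; the exact SPDK scripts/common.sh logic differs in detail:

    #!/usr/bin/env bash
    # Sketch of the enumeration/gating step: walk /sys/class/nvme/nvme*,
    # read each controller's PCI BDF, and skip anything the lists exclude.
    PCI_ALLOWED=${PCI_ALLOWED:-}   # space-separated BDFs; empty = allow all
    PCI_BLOCKED=${PCI_BLOCKED:-}

    pci_can_use() {
        local bdf=$1
        # Empty allow-list matches everything (the unset-LHS test in the trace).
        if [[ -n $PCI_ALLOWED && " $PCI_ALLOWED " != *" $bdf "* ]]; then
            return 1
        fi
        [[ " $PCI_BLOCKED " != *" $bdf "* ]]   # blocked -> nonzero status
    }

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                       # glob may not match
        bdf=$(cat "$ctrl/address" 2>/dev/null) || continue
        pci_can_use "$bdf" || continue
        echo "usable controller: ${ctrl##*/} at $bdf"
    done

With both lists empty, every controller passes, matching the "return 0" followed by "ctrl_dev=nvme3" in the log.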
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:15:22.561 18:19:15 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:15:22.561 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:15:22.562 18:19:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 
18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:15:22.562 18:19:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.562 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 
18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:15:22.563 
18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.563 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.564 18:19:15 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:15:22.564 18:19:15 nvme_scc -- 
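Once a controller's fields are stored, the trace registers it in several bookkeeping arrays (ctrls, nvmes, bdfs just above, plus the ordered_ctrls entry on the next line), and later helpers reach back into the per-controller array through a bash nameref, as the "local -n _ctrl=nvme1" entries further on show. A small sketch of that indirection, with sample values copied from the log:

    #!/usr/bin/env bash
    # Sketch of the registry + nameref lookup pattern from the trace.
    declare -A nvme3=([oncs]=0x15d [vid]=0x1b36)   # fields copied from the log
    declare -A ctrls=([nvme3]=nvme3)               # device name -> array name
    declare -A bdfs=([nvme3]=0000:00:13.0)
    declare -a ordered_ctrls
    ordered_ctrls[3]=nvme3                          # index from ${ctrl_dev/nvme/}

    get_nvme_ctrl_feature() {
        local ctrl=$1 reg=${2:-oncs}
        local -n _ctrl=$ctrl            # nameref: _ctrl aliases the nvme3 array
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }

    get_nvme_ctrl_feature nvme3 oncs    # prints 0x15d

The nameref avoids eval for reads: helpers take only the controller name, and bash resolves "${_ctrl[oncs]}" against whichever array the name points at.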
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:15:22.564 18:19:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:22.564 18:19:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:15:22.565 18:19:15 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:15:22.565 18:19:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:15:22.565 18:19:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:15:22.565 18:19:15 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:23.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:24.070 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.070 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.070 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.070 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:24.070 18:19:17 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:15:24.070 18:19:17 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:24.070 18:19:17 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.070 18:19:17 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:15:24.070 ************************************ 00:15:24.070 START TEST nvme_simple_copy 00:15:24.070 ************************************ 00:15:24.070 18:19:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:15:24.329 Initializing NVMe Controllers 00:15:24.329 Attaching to 0000:00:10.0 00:15:24.329 Controller supports SCC. Attached to 0000:00:10.0 00:15:24.329 Namespace ID: 1 size: 6GB 00:15:24.329 Initialization complete. 
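The ctrl_has_scc loop traced above selects the copy target by testing bit 8 of each controller's ONCS field (0x15d here), the bit the NVMe spec assigns to the Copy command; nvme1 at 0000:00:10.0 is the first controller returned, and the simple_copy test now running exercises exactly that command. A minimal standalone sketch of the same probe, assuming nvme-cli is on PATH — the helper name and the awk parsing are illustrative, not part of nvme/functions.sh:

    #!/usr/bin/env bash
    # Probe a controller's ONCS field and test bit 8 (Copy command support),
    # the same arithmetic check ctrl_has_scc performs in the trace above.
    ctrl_supports_copy() {
        local dev=$1 oncs
        # `nvme id-ctrl` prints a line like "oncs : 0x15d"; keep the hex value
        oncs=$(nvme id-ctrl "$dev" | awk '/^oncs/ {print $3}')
        (( oncs & 1 << 8 ))   # non-zero => Simple Copy (SCC) is implemented
    }
    for dev in /dev/nvme{0..3}; do
        [[ -e $dev ]] && ctrl_supports_copy "$dev" && { echo "$dev"; break; }
    done

With ONCS=0x15d (bits 0, 2, 3, 4, 6 and 8 set) the check passes on all four QEMU controllers, so the suite simply takes the first entry of the resulting list — here nvme1.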
00:15:24.329 00:15:24.329 Controller QEMU NVMe Ctrl (12340 ) 00:15:24.329 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:15:24.329 Namespace Block Size:4096 00:15:24.329 Writing LBAs 0 to 63 with Random Data 00:15:24.329 Copied LBAs from 0 - 63 to the Destination LBA 256 00:15:24.329 LBAs matching Written Data: 64 00:15:24.329 00:15:24.329 real 0m0.297s 00:15:24.330 user 0m0.102s 00:15:24.330 sys 0m0.092s 00:15:24.330 18:19:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.330 18:19:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:15:24.330 ************************************ 00:15:24.330 END TEST nvme_simple_copy 00:15:24.330 ************************************ 00:15:24.330 ************************************ 00:15:24.330 END TEST nvme_scc 00:15:24.330 ************************************ 00:15:24.330 00:15:24.330 real 0m8.890s 00:15:24.330 user 0m1.540s 00:15:24.330 sys 0m2.324s 00:15:24.330 18:19:17 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.330 18:19:17 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:15:24.590 18:19:17 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:15:24.590 18:19:17 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:15:24.590 18:19:17 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:15:24.590 18:19:17 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:15:24.590 18:19:17 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:15:24.590 18:19:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:24.590 18:19:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.590 18:19:17 -- common/autotest_common.sh@10 -- # set +x 00:15:24.590 ************************************ 00:15:24.590 START TEST nvme_fdp 00:15:24.591 ************************************ 00:15:24.591 18:19:17 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:15:24.591 * Looking for test storage... 00:15:24.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:24.591 18:19:17 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:24.591 18:19:17 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:15:24.591 18:19:17 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:24.591 18:19:17 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.591 18:19:17 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:15:24.591 18:19:17 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.591 18:19:17 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:24.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.591 --rc genhtml_branch_coverage=1 00:15:24.591 --rc genhtml_function_coverage=1 00:15:24.591 --rc genhtml_legend=1 00:15:24.591 --rc geninfo_all_blocks=1 00:15:24.591 --rc geninfo_unexecuted_blocks=1 00:15:24.591 00:15:24.591 ' 00:15:24.591 18:19:17 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:24.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.591 --rc genhtml_branch_coverage=1 00:15:24.591 --rc genhtml_function_coverage=1 00:15:24.591 --rc genhtml_legend=1 00:15:24.591 --rc geninfo_all_blocks=1 00:15:24.591 --rc geninfo_unexecuted_blocks=1 00:15:24.591 00:15:24.591 ' 00:15:24.591 18:19:17 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:24.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.591 --rc genhtml_branch_coverage=1 00:15:24.591 --rc genhtml_function_coverage=1 00:15:24.591 --rc genhtml_legend=1 00:15:24.591 --rc geninfo_all_blocks=1 00:15:24.591 --rc geninfo_unexecuted_blocks=1 00:15:24.591 00:15:24.591 ' 00:15:24.591 18:19:17 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:24.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.591 --rc genhtml_branch_coverage=1 00:15:24.591 --rc genhtml_function_coverage=1 00:15:24.591 --rc genhtml_legend=1 00:15:24.591 --rc geninfo_all_blocks=1 00:15:24.591 --rc geninfo_unexecuted_blocks=1 00:15:24.591 00:15:24.591 ' 00:15:24.591 18:19:17 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:24.591 18:19:17 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:24.591 18:19:17 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:15:24.851 18:19:17 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:24.851 18:19:17 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:24.851 18:19:17 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.851 18:19:17 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.851 18:19:17 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.851 18:19:17 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.851 18:19:17 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.851 18:19:17 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.851 18:19:17 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.851 18:19:17 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:15:24.851 18:19:17 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.851 18:19:17 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:15:24.851 18:19:17 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:24.851 18:19:17 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:15:24.851 18:19:17 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:24.851 18:19:17 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:15:24.851 18:19:17 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:24.851 18:19:17 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:24.851 18:19:17 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:24.851 18:19:17 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:15:24.851 18:19:17 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:24.851 18:19:17 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:25.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:25.369 Waiting for block devices as requested 00:15:25.628 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:25.628 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:25.628 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:25.887 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:31.174 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:31.174 18:19:24 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:15:31.174 18:19:24 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:15:31.174 18:19:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:31.174 18:19:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:31.174 18:19:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:31.174 18:19:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:31.174 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:15:31.174 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:31.175 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:15:31.175 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:15:31.176 18:19:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:15:31.176 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 
18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:15:31.177 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:15:31.177 18:19:24 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:15:31.177 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.177 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
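The run of entries above is the shell trace of SPDK's nvme_get helper (each traced command is prefixed with its source file and line): every "field : value" line emitted by nvme-cli is split on the first colon via IFS=: read -r reg val, and a quoted eval stores it into a global associative array named after the device node, here ng0n1. A minimal self-contained sketch of that idiom follows; parse_id_output is a hypothetical name for illustration, not SPDK's function.

    #!/usr/bin/env bash
    # Sketch of the traced idiom (parse_id_output is a hypothetical name):
    # run an nvme-cli "id-*" command and load its "field : value" lines
    # into a global associative array, as nvme_get does in the trace above.
    parse_id_output() {
        local ref=$1 reg val
        shift                    # the remaining args are the command to run
        local -gA "$ref=()"      # (re)declare the global associative array
        while IFS=: read -r reg val; do
            reg=${reg// /}       # "lbaf  4" -> "lbaf4", "nsze   " -> "nsze"
            val=${val# }         # drop the padding after the colon
            [[ -n $reg && -n $val ]] || continue   # skip headers/blank lines
            eval "${ref}[${reg}]=\"${val}\""       # e.g. ns[nsze]="0x140000"
        done < <("$@")
    }

    # Hypothetical usage against the QEMU namespace dumped above:
    #   parse_id_output ns /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
    #   echo "${ns[nsze]}"    # -> 0x140000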
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:15:31.178 18:19:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
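The lbaf0 through lbaf7 entries just captured describe the eight LBA formats this namespace supports (ms: metadata bytes, lbads: log2 of the data size, rp: relative performance), and flbas=0x4 selects the format nvme-cli marks "(in use)": lbaf4 with lbads:12, i.e. 2^12 = 4096-byte blocks. A hedged sketch of recovering the active block size from fields parsed this way, with the array literal copied from the dump above:

    # Sketch only: derive the in-use LBA data size from parsed id-ns fields.
    declare -A ns=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')

    fmt=$((ns[flbas] & 0xf))     # low nibble of FLBAS = active format index
    entry=${ns[lbaf$fmt]}        # -> "ms:0 lbads:12 rp:0 (in use)"
    lbads=${entry#*lbads:}       # -> "12 rp:0 (in use)"
    lbads=${lbads%% *}           # -> "12"
    echo "active LBA format $fmt: $((1 << lbads))-byte blocks"   # 4096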
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:15:31.178 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]]
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:15:31.179 18:19:24 nvme_fdp --
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:15:31.179 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:31.179 18:19:24 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.179 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:15:31.180 18:19:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:31.180 18:19:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:31.180 18:19:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:31.180 18:19:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:15:31.180 18:19:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
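Fields like oacs, frmw and lpa in the id-ctrl dump above are hex bitmasks, so harnesses usually gate on single capability bits with shell arithmetic. An illustrative check, not SPDK's code, using the values traced for nvme1 and bit positions from the NVMe base specification:

    # Sketch: test capability bits from parsed id-ctrl values.
    declare -A ctrl=([oacs]=0x12a [lpa]=0x7)

    # OACS bit 3 = Namespace Management/Attachment commands supported
    if ((ctrl[oacs] & (1 << 3))); then
        echo "namespace management supported"
    fi
    # LPA bit 1 = Commands Supported and Effects log page implemented
    ((ctrl[lpa] & (1 << 1))) && echo "commands-and-effects log available"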
00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.180 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
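One pair of fields a few entries back is easy to misread: wctemp=343 and cctemp=373 are kelvins per the NVMe specification, which works out to the usual QEMU thresholds of roughly 70 C (warning composite temperature) and 100 C (critical). A trivial conversion sketch:

    # Sketch: NVMe temperature thresholds are reported in kelvins.
    declare -A ctrl=([wctemp]=343 [cctemp]=373)
    echo "warning  threshold: ~$((ctrl[wctemp] - 273)) C"   # ~70 C
    echo "critical threshold: ~$((ctrl[cctemp] - 273)) C"   # ~100 C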
00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:15:31.181 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:15:31.182 18:19:24 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
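[editor's note] The run of functions.sh@21-23 lines here is SPDK's nvme_get helper at work: it runs /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns) for a device, splits every "field : value" line on ':' via IFS=: read -r reg val, and evals each pair into a global associative array named after the device (nvme1, ng1n1, nvme1n1, ...). A minimal self-contained sketch of that pattern, under assumed names (parse_id_output and the inline sample are illustrative, not SPDK's code):

#!/usr/bin/env bash
# Sketch of the nvme_get parsing loop traced above: split "field : value"
# lines on the first ':' and eval each pair into a named associative array.
parse_id_output() {
    local ref=$1 reg val        # $1 names the target array, e.g. nvme1
    local -gA "$ref=()"         # same global-array declaration as functions.sh@20
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip header/blank lines (functions.sh@22)
        reg=${reg//[[:space:]]/}           # strip the padding around the key
        val=${val# }                       # drop the single leading space
        eval "${ref}[${reg}]=\"\$val\""    # e.g. nvme1[sqes]="0x66" (functions.sh@23)
    done
}

# Stand-in for `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1`:
parse_id_output nvme1 <<'EOF'
sqes      : 0x66
cqes      : 0x44
oncs      : 0x15d
subnqn    : nqn.2019-08.org.qemu:12340
EOF
declare -p nvme1   # declare -A nvme1=([sqes]="0x66" [cqes]="0x44" ... )

Note that read splits only on the first ':', which is why the colon inside the subnqn value survives intact in the trace above.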
00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.182 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:15:31.183 18:19:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
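[editor's note] The ng1n1 table being filled in here comes from the namespace scan at functions.sh@53-58 (visible a little further up): for each controller the loop globs /sys/class/nvme/nvmeX with an extglob alternation matching both the generic character node (ng1n1) and the block node (nvme1n1), runs the same id-ns parse for each, and indexes the result into the per-controller nvme1_ns map. A hedged reconstruction of that loop; nvme_get is stubbed and the sysfs layout is assumed:

#!/usr/bin/env bash
shopt -s extglob nullglob   # extglob provides the @(...) pattern from functions.sh@54

ctrl=/sys/class/nvme/nvme1
declare -A nvme1_ns
nvme_get() { echo "parse: nvme $2 $3 -> array $1"; }   # stub for this sketch

scan_namespaces() {
    local -n _ctrl_ns=${ctrl##*/}_ns     # functions.sh@53: nameref to nvme1_ns
    local ns ns_dev
    # For nvme1 the pattern expands to ng1* and nvme1n* entries (functions.sh@54).
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue         # functions.sh@55
        ns_dev=${ns##*/}                 # functions.sh@56: ng1n1, then nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@57
        _ctrl_ns[${ns##*n}]=$ns_dev      # functions.sh@58: keyed by namespace ID
    done
}
scan_namespaces
declare -p nvme1_ns

Both nodes share namespace ID 1, so the same key is written twice and the block device parsed second (nvme1n1, further below) is the entry that sticks.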
00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:31.183 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:31.183 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.183 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:15:31.184 18:19:24 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:15:31.184 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
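[editor's note] The lbaf0-lbaf7 rows being recorded here are the eight LBA formats the namespace advertises: ms is metadata bytes per sector, lbads is the data size as a power of two, and rp is a relative-performance hint. The low nibble of the flbas value captured above (0x7) selects format 7, which is why the "ms:64 lbads:12" row just below carries the "(in use)" tag: the namespace is formatted with 2^12 = 4096-byte blocks plus 64 bytes of metadata each. The arithmetic in shell, for illustration:

#!/usr/bin/env bash
# Decode the flbas/lbads values captured for nvme1n1 above.
flbas=0x7
echo "in-use LBA format: $((flbas & 0xf))"     # low nibble of flbas -> 7
echo "lbads:12 -> $((1 << 12))-byte blocks"    # 4096
echo "lbads:9  -> $((1 << 9))-byte blocks"     # 512, the other formats listed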
00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:15:31.184 18:19:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:31.184 18:19:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:31.184 18:19:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:31.184 18:19:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:31.184 18:19:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
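[editor's note] Just above, functions.sh@60-63 closed out nvme1 by registering it in the suite's global maps (ctrls, nvmes, bdfs, plus an ordered_ctrls slot keyed by controller number), and the @47-52 loop then advanced to /sys/class/nvme/nvme2, where pci_can_use (scripts/common.sh@18-27) vets the 0000:00:12.0 BDF against any allow/block list before id-ctrl is parsed. A condensed reconstruction of that outer loop; the helper bodies and the readlink-based BDF lookup are assumptions, not SPDK's exact code:

#!/usr/bin/env bash
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

nvme_get()    { :; }         # placeholder for the id-ctrl/id-ns parser above
pci_can_use() { return 0; }  # placeholder: the real one honors PCI allow/block lists

for ctrl in /sys/class/nvme/nvme*; do                # functions.sh@47
    [[ -e $ctrl ]] || continue                       # functions.sh@48
    pci=$(readlink -f "$ctrl/device")                # assumed lookup for @49
    pci=${pci##*/}                                   # e.g. 0000:00:12.0
    pci_can_use "$pci" || continue                   # functions.sh@50
    ctrl_dev=${ctrl##*/}                             # functions.sh@51: nvme2
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # functions.sh@52
    # ... namespace scan (functions.sh@53-58) would run here ...
    ctrls["$ctrl_dev"]=$ctrl_dev                     # functions.sh@60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # functions.sh@61
    bdfs["$ctrl_dev"]=$pci                           # functions.sh@62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # functions.sh@63
done
declare -p ctrls bdfs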
00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:15:31.185 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
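[editor's note] Two of the identify fields, sqes=0x66 and cqes=0x44 (recorded for nvme1 earlier and repeated for nvme2 below), pack a pair of values into one byte: the high nibble is the maximum and the low nibble the required queue-entry size, each as a power of two, so these QEMU controllers accept only the standard 64-byte submission and 16-byte completion entries. Checked in shell arithmetic:

#!/usr/bin/env bash
# Decode the sqes/cqes bytes captured for the QEMU controllers in this log.
for field in sqes:0x66 cqes:0x44; do
    name=${field%%:*} val=${field#*:}
    min=$((1 << (val & 0xf)))          # low nibble: required entry size
    max=$((1 << ((val >> 4) & 0xf)))   # high nibble: maximum entry size
    printf '%s=%s -> min %d, max %d bytes\n' "$name" "$val" "$min" "$max"
done
# sqes=0x66 -> min 64, max 64 bytes
# cqes=0x44 -> min 16, max 16 bytes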
00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:15:31.185 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:15:31.186 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.186 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
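What the xtrace above is doing: the nvme_get() helper in nvme/functions.sh reads each "field : value" line printed by nvme id-ctrl with IFS=':' into a reg/val pair, skips empty values ([[ -n $val ]]), and evals the pair into a per-controller associative array (nvme2[frmw]=0x3, nvme2[lpa]=0x7, ..., nvme2[subnqn]=nqn.2019-08.org.qemu:12342). A minimal sketch of that loop, reconstructed only from what the trace itself shows -- the name nvme_get_sketch and the exact whitespace trimming are illustrative, not the real helper:

    #!/usr/bin/env bash
    # Sketch of the parsing loop traced above. $1 names the target array,
    # the remaining args are the command to run (e.g. nvme id-ctrl /dev/nvme2).
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare -gA nvme2=(), as in the trace
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # keep only "reg : val" lines
            reg=${reg//[[:space:]]/}         # strip the column padding around the key
            val=${val# }                     # drop the leading space after ':'
            eval "${ref}[\$reg]=\$val"       # nvme2[lpa]=0x7, etc.
        done < <("$@")
    }

    # Usage matching the trace (paths as logged):
    # nvme_get_sketch nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
    # echo "${nvme2[subnqn]}"                # nqn.2019-08.org.qemu:12342

Note that splitting on ':' only at the first separator is what lets values that themselves contain colons (ps0, subnqn above) survive intact: read -r reg val leaves the whole remainder of the line in val.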
00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 
18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:15:31.187 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.188 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:15:31.189 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 
18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.189 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:15:31.190 
18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:15:31.190 18:19:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:31.190 18:19:24 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:15:31.190 18:19:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:15:31.191 18:19:24 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:31.191 
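With nsze and flbas in hand, the namespace size in bytes follows directly: flbas 0x4 selects LBA format 4, whose descriptor (captured further down as 'ms:0 lbads:12 rp:0 (in use)') gives 2^12-byte blocks, so 0x100000 LBAs is 4 GiB. A quick sketch of the arithmetic:

    # Namespace size from the fields traced here: LBA count times the
    # active format's data size (2^lbads).
    nsze=0x100000 lbads=12
    echo $(( nsze * (1 << lbads) ))   # 4294967296 bytes = 4 GiB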
18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:15:31.191 18:19:24 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:31.191 
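Note that every namespace in this run reports an all-zero nguid and eui64, which is common for emulated test-bed controllers; the spec treats a zero identifier as not assigned. Anything that wants a stable namespace key should detect that case rather than trusting the field, roughly:

    # Treat an all-zero NGUID/EUI64 as "not assigned" before keying on it.
    nguid=00000000000000000000000000000000
    if [[ $nguid =~ ^0+$ ]]; then
        echo "no NGUID; fall back to subsystem NQN plus NSID"
    fi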
18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:31.191 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
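Each of the eight lbafN strings captured above packs three fields: ms (metadata bytes per LBA), lbads (log2 of the data size), and rp (relative performance), with '(in use)' flagging the format that flbas selected. A sketch that unpacks one descriptor as stored by the trace (the regex sits in a variable, as bash requires when the pattern contains spaces):

    # Unpack an LBA-format descriptor string as captured above.
    desc='ms:0 lbads:12 rp:0 (in use)'
    re='ms:([0-9]+) +lbads:([0-9]+) +rp:([0-9]+)'
    if [[ $desc =~ $re ]]; then
        echo "metadata ${BASH_REMATCH[1]} B, data $(( 1 << BASH_REMATCH[2] )) B, rp ${BASH_REMATCH[3]}"
    fi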
00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:15:31.192 18:19:24 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.192 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:15:31.193 18:19:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.193 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.455 18:19:24 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.455 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:15:31.456 18:19:24 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:15:31.456 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.456 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:31.457 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:15:31.457 18:19:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:31.457 18:19:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:31.457 18:19:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:31.457 18:19:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:15:31.457 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
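The xtrace records above show nvme_get (nvme/functions.sh) walking the output of nvme id-ctrl /dev/nvme3 line by line: IFS=: splits each line into a register name and a value, and an eval stores the pair in a global associative array named after the controller. A minimal standalone sketch of the same idiom follows; the array name, device path, and whitespace trimming are illustrative assumptions, not lifted from the test suite:

    # Parse `nvme id-ctrl` output ("reg : val" lines) into an associative array.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # keys like "mdts", "ctratt"
        [[ -n $reg && -n $val ]] || continue   # skip blank or headerless lines
        ctrl[$reg]=${val# }                    # drop the single leading space
    done < <(nvme id-ctrl /dev/nvme0)          # /dev/nvme0 is a placeholder device
    echo "mdts=${ctrl[mdts]} ctratt=${ctrl[ctratt]}"

Unlike this sketch, the real helper evals into a caller-named array (nvme3 here), which is what produces the eval 'nvme3[...]=...' records in the trace.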
00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.458 18:19:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 
18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.458 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.459 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
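Once the id-ctrl pass completes, every register of each controller lives in an associative array named after its device (nvme0 through nvme3). The library reads these arrays back indirectly: a function receives the controller name as a string and binds a bash nameref to the matching array, as the get_nvme_ctrl_feature records below demonstrate for ctratt. A minimal sketch of that indirection, with an illustrative function name and values copied from this trace:

    declare -A nvme3=([ctratt]=0x88010 [mdts]=7)  # values as captured above
    get_feature() {
        local ctrl=$1 reg=$2
        local -n _ctrl=$ctrl          # nameref: _ctrl aliases the array named by $ctrl
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }
    get_feature nvme3 ctratt          # prints 0x88010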
00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:31.460 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:15:31.461 18:19:24 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:15:31.461 18:19:24 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:15:31.461 18:19:24 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:15:31.461 18:19:24 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:15:31.461 18:19:24 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:32.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:32.968 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:32.968 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:32.968 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:32.968 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:32.968 18:19:26 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:15:32.968 18:19:26 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:32.968 18:19:26 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.968 18:19:26 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:15:32.968 ************************************ 00:15:32.968 START TEST nvme_flexible_data_placement 00:15:32.968 ************************************ 00:15:32.968 18:19:26 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:15:33.228 Initializing NVMe Controllers 00:15:33.228 Attaching to 0000:00:13.0 00:15:33.228 Controller supports FDP Attached to 0000:00:13.0 00:15:33.228 Namespace ID: 1 Endurance Group ID: 1 00:15:33.228 Initialization complete. 
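The controller scan above settles on nvme3 because CTRATT bit 19 is the Flexible Data Placement capability bit: 0x88010 has it set, while the 0x8000 reported by the other controllers sets bit 15 only. The arithmetic ctrl_has_fdp performs, reduced to a standalone check (the helper name is illustrative):

    has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))        # true only when the FDP capability bit is set
    }
    has_fdp 0x8000  && echo fdp       # bit 15 only: prints nothing
    has_fdp 0x88010 && echo fdp       # bit 19 set: prints fdp

The nvme_flexible_data_placement test launched above runs against that controller at 0000:00:13.0, and its output follows.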
00:15:33.228 00:15:33.228 ================================== 00:15:33.228 == FDP tests for Namespace: #01 == 00:15:33.228 ================================== 00:15:33.228 00:15:33.228 Get Feature: FDP: 00:15:33.228 ================= 00:15:33.228 Enabled: Yes 00:15:33.228 FDP configuration Index: 0 00:15:33.228 00:15:33.228 FDP configurations log page 00:15:33.228 =========================== 00:15:33.228 Number of FDP configurations: 1 00:15:33.228 Version: 0 00:15:33.228 Size: 112 00:15:33.228 FDP Configuration Descriptor: 0 00:15:33.228 Descriptor Size: 96 00:15:33.228 Reclaim Group Identifier format: 2 00:15:33.228 FDP Volatile Write Cache: Not Present 00:15:33.228 FDP Configuration: Valid 00:15:33.228 Vendor Specific Size: 0 00:15:33.228 Number of Reclaim Groups: 2 00:15:33.228 Number of Reclaim Unit Handles: 8 00:15:33.228 Max Placement Identifiers: 128 00:15:33.228 Number of Namespaces Supported: 256 00:15:33.228 Reclaim Unit Nominal Size: 6000000 bytes 00:15:33.228 Estimated Reclaim Unit Time Limit: Not Reported 00:15:33.228 RUH Desc #000: RUH Type: Initially Isolated 00:15:33.228 RUH Desc #001: RUH Type: Initially Isolated 00:15:33.228 RUH Desc #002: RUH Type: Initially Isolated 00:15:33.228 RUH Desc #003: RUH Type: Initially Isolated 00:15:33.228 RUH Desc #004: RUH Type: Initially Isolated 00:15:33.228 RUH Desc #005: RUH Type: Initially Isolated 00:15:33.228 RUH Desc #006: RUH Type: Initially Isolated 00:15:33.228 RUH Desc #007: RUH Type: Initially Isolated 00:15:33.228 00:15:33.228 FDP reclaim unit handle usage log page 00:15:33.228 ====================================== 00:15:33.228 Number of Reclaim Unit Handles: 8 00:15:33.228 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:15:33.228 RUH Usage Desc #001: RUH Attributes: Unused 00:15:33.228 RUH Usage Desc #002: RUH Attributes: Unused 00:15:33.228 RUH Usage Desc #003: RUH Attributes: Unused 00:15:33.228 RUH Usage Desc #004: RUH Attributes: Unused 00:15:33.228 RUH Usage Desc #005: RUH Attributes: Unused 00:15:33.228 RUH Usage Desc #006: RUH Attributes: Unused 00:15:33.228 RUH Usage Desc #007: RUH Attributes: Unused 00:15:33.228 00:15:33.228 FDP statistics log page 00:15:33.228 ======================= 00:15:33.228 Host bytes with metadata written: 841498624 00:15:33.228 Media bytes with metadata written: 841666560 00:15:33.228 Media bytes erased: 0 00:15:33.228 00:15:33.228 FDP Reclaim unit handle status 00:15:33.228 ============================== 00:15:33.228 Number of RUHS descriptors: 2 00:15:33.228 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003d7c 00:15:33.228 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:15:33.228 00:15:33.228 FDP write on placement id: 0 success 00:15:33.228 00:15:33.228 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:15:33.228 00:15:33.228 IO mgmt send: RUH update for Placement ID: #0 Success 00:15:33.228 00:15:33.228 Get Feature: FDP Events for Placement handle: #0 00:15:33.228 ======================== 00:15:33.228 Number of FDP Events: 6 00:15:33.228 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:15:33.228 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:15:33.228 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:15:33.228 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:15:33.228 FDP Event: #4 Type: Media Reallocated Enabled: No 00:15:33.228 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:15:33.228 00:15:33.228 FDP events log page
00:15:33.228 =================== 00:15:33.228 Number of FDP events: 1 00:15:33.228 FDP Event #0: 00:15:33.228 Event Type: RU Not Written to Capacity 00:15:33.228 Placement Identifier: Valid 00:15:33.228 NSID: Valid 00:15:33.228 Location: Valid 00:15:33.228 Placement Identifier: 0 00:15:33.228 Event Timestamp: 6 00:15:33.228 Namespace Identifier: 1 00:15:33.228 Reclaim Group Identifier: 0 00:15:33.228 Reclaim Unit Handle Identifier: 0 00:15:33.228 00:15:33.228 FDP test passed 00:15:33.228 00:15:33.228 real 0m0.286s 00:15:33.228 user 0m0.100s 00:15:33.228 sys 0m0.084s 00:15:33.228 18:19:26 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.228 18:19:26 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:15:33.228 ************************************ 00:15:33.228 END TEST nvme_flexible_data_placement 00:15:33.228 ************************************ 00:15:33.228 00:15:33.228 real 0m8.810s 00:15:33.228 user 0m1.556s 00:15:33.228 sys 0m2.278s 00:15:33.228 ************************************ 00:15:33.228 END TEST nvme_fdp 00:15:33.228 ************************************ 00:15:33.228 18:19:26 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.228 18:19:26 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:15:33.228 18:19:26 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:15:33.228 18:19:26 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:15:33.228 18:19:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:33.228 18:19:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.228 18:19:26 -- common/autotest_common.sh@10 -- # set +x 00:15:33.228 ************************************ 00:15:33.228 START TEST nvme_rpc 00:15:33.228 ************************************ 00:15:33.229 18:19:26 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:15:33.493 * Looking for test storage... 
00:15:33.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.493 18:19:26 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:33.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.493 --rc genhtml_branch_coverage=1 00:15:33.493 --rc genhtml_function_coverage=1 00:15:33.493 --rc genhtml_legend=1 00:15:33.493 --rc geninfo_all_blocks=1 00:15:33.493 --rc geninfo_unexecuted_blocks=1 00:15:33.493 00:15:33.493 ' 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:33.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.493 --rc genhtml_branch_coverage=1 00:15:33.493 --rc genhtml_function_coverage=1 00:15:33.493 --rc genhtml_legend=1 00:15:33.493 --rc geninfo_all_blocks=1 00:15:33.493 --rc geninfo_unexecuted_blocks=1 00:15:33.493 00:15:33.493 ' 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:15:33.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.493 --rc genhtml_branch_coverage=1 00:15:33.493 --rc genhtml_function_coverage=1 00:15:33.493 --rc genhtml_legend=1 00:15:33.493 --rc geninfo_all_blocks=1 00:15:33.493 --rc geninfo_unexecuted_blocks=1 00:15:33.493 00:15:33.493 ' 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:33.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.493 --rc genhtml_branch_coverage=1 00:15:33.493 --rc genhtml_function_coverage=1 00:15:33.493 --rc genhtml_legend=1 00:15:33.493 --rc geninfo_all_blocks=1 00:15:33.493 --rc geninfo_unexecuted_blocks=1 00:15:33.493 00:15:33.493 ' 00:15:33.493 18:19:26 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.493 18:19:26 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:33.493 18:19:26 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:33.757 18:19:26 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:33.757 18:19:26 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:33.757 18:19:26 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:15:33.757 18:19:26 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:15:33.757 18:19:26 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67658 00:15:33.757 18:19:26 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:15:33.757 18:19:26 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:15:33.757 18:19:26 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67658 00:15:33.757 18:19:26 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67658 ']' 00:15:33.757 18:19:26 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.757 18:19:26 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.757 18:19:26 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.757 18:19:26 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.757 18:19:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.757 [2024-11-26 18:19:26.967418] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
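Stripped of the xtrace noise, the nvme_rpc exchange that plays out below reduces to three RPC calls: attach the first controller by PCIe address, attempt a firmware apply against a file that does not exist (the test expects the -32603 "open file failed." error), and detach again. A condensed sketch using only calls visible in this log; $rpc is shorthand for the repo's scripts/rpc.py.

# Condensed sketch of the nvme_rpc flow traced below (calls copied from
# this log; the error handling line is illustrative).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes bdev Nvme0n1
$rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1 \
  || echo 'expected negative-test failure: open file failed. (-32603)'
$rpc bdev_nvme_detach_controller Nvme0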
00:15:33.757 [2024-11-26 18:19:26.967649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67658 ] 00:15:34.016 [2024-11-26 18:19:27.139975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:34.016 [2024-11-26 18:19:27.258206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.016 [2024-11-26 18:19:27.258245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.952 18:19:28 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.952 18:19:28 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:34.952 18:19:28 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:15:35.212 Nvme0n1 00:15:35.212 18:19:28 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:15:35.212 18:19:28 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:15:35.470 request: 00:15:35.470 { 00:15:35.470 "bdev_name": "Nvme0n1", 00:15:35.470 "filename": "non_existing_file", 00:15:35.470 "method": "bdev_nvme_apply_firmware", 00:15:35.470 "req_id": 1 00:15:35.470 } 00:15:35.470 Got JSON-RPC error response 00:15:35.470 response: 00:15:35.470 { 00:15:35.470 "code": -32603, 00:15:35.470 "message": "open file failed." 00:15:35.470 } 00:15:35.470 18:19:28 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:15:35.470 18:19:28 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:15:35.470 18:19:28 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:35.728 18:19:28 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:35.728 18:19:28 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67658 00:15:35.728 18:19:28 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67658 ']' 00:15:35.728 18:19:28 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67658 00:15:35.728 18:19:28 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:35.728 18:19:28 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.728 18:19:28 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67658 00:15:35.728 18:19:28 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:35.728 18:19:28 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:35.728 18:19:28 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67658' 00:15:35.728 killing process with pid 67658 00:15:35.728 18:19:28 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67658 00:15:35.728 18:19:28 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67658 00:15:38.301 00:15:38.301 real 0m4.605s 00:15:38.301 user 0m8.492s 00:15:38.301 sys 0m0.717s 00:15:38.301 ************************************ 00:15:38.301 END TEST nvme_rpc 00:15:38.301 ************************************ 00:15:38.301 18:19:31 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.301 18:19:31 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.301 18:19:31 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:38.301 18:19:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:15:38.301 18:19:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.301 18:19:31 -- common/autotest_common.sh@10 -- # set +x 00:15:38.301 ************************************ 00:15:38.301 START TEST nvme_rpc_timeouts 00:15:38.301 ************************************ 00:15:38.301 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:38.301 * Looking for test storage... 00:15:38.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:38.301 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:38.301 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:15:38.301 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:38.301 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:38.301 18:19:31 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:15:38.301 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:38.301 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:38.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.301 --rc genhtml_branch_coverage=1 00:15:38.301 --rc genhtml_function_coverage=1 00:15:38.301 --rc genhtml_legend=1 00:15:38.301 --rc geninfo_all_blocks=1 00:15:38.301 --rc geninfo_unexecuted_blocks=1 00:15:38.301 00:15:38.301 ' 00:15:38.301 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:38.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.301 --rc genhtml_branch_coverage=1 00:15:38.301 --rc genhtml_function_coverage=1 00:15:38.301 --rc genhtml_legend=1 00:15:38.301 --rc geninfo_all_blocks=1 00:15:38.301 --rc geninfo_unexecuted_blocks=1 00:15:38.301 00:15:38.301 ' 00:15:38.301 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:38.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.301 --rc genhtml_branch_coverage=1 00:15:38.301 --rc genhtml_function_coverage=1 00:15:38.301 --rc genhtml_legend=1 00:15:38.301 --rc geninfo_all_blocks=1 00:15:38.301 --rc geninfo_unexecuted_blocks=1 00:15:38.301 00:15:38.301 ' 00:15:38.301 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:38.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.302 --rc genhtml_branch_coverage=1 00:15:38.302 --rc genhtml_function_coverage=1 00:15:38.302 --rc genhtml_legend=1 00:15:38.302 --rc geninfo_all_blocks=1 00:15:38.302 --rc geninfo_unexecuted_blocks=1 00:15:38.302 00:15:38.302 ' 00:15:38.302 18:19:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:38.302 18:19:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67739 00:15:38.302 18:19:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67739 00:15:38.302 18:19:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67777 00:15:38.302 18:19:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:15:38.302 18:19:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:15:38.302 18:19:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67777 00:15:38.302 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67777 ']' 00:15:38.302 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.302 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.302 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.302 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.302 18:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:15:38.302 [2024-11-26 18:19:31.571747] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:15:38.302 [2024-11-26 18:19:31.571961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67777 ] 00:15:38.561 [2024-11-26 18:19:31.748511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:38.561 [2024-11-26 18:19:31.868298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.561 [2024-11-26 18:19:31.868337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.497 Checking default timeout settings: 00:15:39.497 18:19:32 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.497 18:19:32 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:15:39.497 18:19:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:15:39.497 18:19:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:39.756 Making settings changes with rpc: 00:15:39.756 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:15:39.756 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:15:40.014 Check default vs. modified settings: 00:15:40.014 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:15:40.014 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67739 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67739 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:40.583 Setting action_on_timeout is changed as expected. 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67739 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67739 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:40.583 Setting timeout_us is changed as expected. 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67739 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67739 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:40.583 Setting timeout_admin_us is changed as expected. 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67739 /tmp/settings_modified_67739 00:15:40.583 18:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67777 00:15:40.583 18:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67777 ']' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67777 00:15:40.583 18:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:15:40.583 18:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67777 00:15:40.583 18:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.583 18:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.583 18:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67777' 00:15:40.583 killing process with pid 67777 00:15:40.583 18:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67777 00:15:40.583 18:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67777 00:15:43.144 18:19:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:15:43.144 RPC TIMEOUT SETTING TEST PASSED. 
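The pass/fail decision above comes from two save_config snapshots, one taken with defaults and one after bdev_nvme_set_options, compared field by field with the grep/awk/sed pipeline shown in the trace. The same check can be sketched more directly with jq; jq here is an editorial assumption, since the test itself parses with the grep pipeline.

# Sketch of the default-vs-modified comparison, assuming jq is available
# (the test's own parser is the grep|awk|sed pipeline above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified
for key in action_on_timeout timeout_us timeout_admin_us; do
  # prints the default value, then the modified one, for each setting
  jq -r --arg k "$key" '.. | objects | select(has($k)) | .[$k]' \
    /tmp/settings_default /tmp/settings_modified
done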
00:15:43.144 ************************************ 00:15:43.144 END TEST nvme_rpc_timeouts 00:15:43.144 ************************************ 00:15:43.144 00:15:43.144 real 0m4.964s 00:15:43.144 user 0m9.405s 00:15:43.144 sys 0m0.777s 00:15:43.144 18:19:36 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.144 18:19:36 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:15:43.144 18:19:36 -- spdk/autotest.sh@239 -- # uname -s 00:15:43.144 18:19:36 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:15:43.144 18:19:36 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:15:43.144 18:19:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:43.144 18:19:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.144 18:19:36 -- common/autotest_common.sh@10 -- # set +x 00:15:43.144 ************************************ 00:15:43.144 START TEST sw_hotplug 00:15:43.144 ************************************ 00:15:43.144 18:19:36 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:15:43.144 * Looking for test storage... 00:15:43.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:43.144 18:19:36 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:43.144 18:19:36 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:15:43.144 18:19:36 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:43.144 18:19:36 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:43.144 18:19:36 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:15:43.404 18:19:36 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:15:43.405 18:19:36 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.405 18:19:36 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:15:43.405 18:19:36 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.405 18:19:36 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:15:43.405 18:19:36 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:15:43.405 18:19:36 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.405 18:19:36 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:15:43.405 18:19:36 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.405 18:19:36 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.405 18:19:36 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.405 18:19:36 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:15:43.405 18:19:36 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.405 18:19:36 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:43.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.405 --rc genhtml_branch_coverage=1 00:15:43.405 --rc genhtml_function_coverage=1 00:15:43.405 --rc genhtml_legend=1 00:15:43.405 --rc geninfo_all_blocks=1 00:15:43.405 --rc geninfo_unexecuted_blocks=1 00:15:43.405 00:15:43.405 ' 00:15:43.405 18:19:36 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:43.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.405 --rc genhtml_branch_coverage=1 00:15:43.405 --rc genhtml_function_coverage=1 00:15:43.405 --rc genhtml_legend=1 00:15:43.405 --rc geninfo_all_blocks=1 00:15:43.405 --rc geninfo_unexecuted_blocks=1 00:15:43.405 00:15:43.405 ' 00:15:43.405 18:19:36 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:43.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.405 --rc genhtml_branch_coverage=1 00:15:43.405 --rc genhtml_function_coverage=1 00:15:43.405 --rc genhtml_legend=1 00:15:43.405 --rc geninfo_all_blocks=1 00:15:43.405 --rc geninfo_unexecuted_blocks=1 00:15:43.405 00:15:43.405 ' 00:15:43.405 18:19:36 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:43.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.405 --rc genhtml_branch_coverage=1 00:15:43.405 --rc genhtml_function_coverage=1 00:15:43.405 --rc genhtml_legend=1 00:15:43.405 --rc geninfo_all_blocks=1 00:15:43.405 --rc geninfo_unexecuted_blocks=1 00:15:43.405 00:15:43.405 ' 00:15:43.405 18:19:36 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:43.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:43.922 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:43.922 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:43.922 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:43.922 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:44.181 18:19:37 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:15:44.181 18:19:37 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:15:44.181 18:19:37 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
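Seen in isolation, the nvme_in_userspace scan traced next is one pipeline: ask lspci for machine-readable output with full PCI domains, keep devices whose programming interface is 02, match class/subclass 0108 (NVMe), and print the bare addresses. Restated standalone, assembled verbatim from the scripts/common.sh trace below:

# Core of the nvme_in_userspace scan: print the BDFs of all NVMe controllers.
lspci -mm -n -D | grep -i -- -p02 \
  | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'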
00:15:44.181 18:19:37 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:15:44.181 18:19:37 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:15:44.181 18:19:37 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:15:44.181 18:19:37 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:15:44.181 18:19:37 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:15:44.181 18:19:37 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:15:44.181 18:19:37 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:15:44.181 18:19:37 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:15:44.181 18:19:37 sw_hotplug -- scripts/common.sh@233 -- # local class 00:15:44.181 18:19:37 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:15:44.181 18:19:37 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:15:44.181 18:19:37 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:15:44.181 18:19:37 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:15:44.181 18:19:37 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:15:44.181 18:19:37 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:44.182 18:19:37 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:15:44.182 18:19:37 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:44.182 18:19:37 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:15:44.182 18:19:37 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:15:44.182 18:19:37 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:44.750 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:45.008 Waiting for block devices as requested 00:15:45.009 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:45.009 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:45.267 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:45.267 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:50.539 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:50.539 18:19:43 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:15:50.539 18:19:43 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:50.798 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:15:51.057 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:51.057 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:15:51.315 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:15:51.574 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:51.574 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:51.832 18:19:44 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:15:51.832 18:19:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:51.832 18:19:45 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:15:51.832 18:19:45 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:15:51.832 18:19:45 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68659 00:15:51.832 18:19:45 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:15:51.832 18:19:45 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:15:51.832 18:19:45 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:51.832 18:19:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:15:51.832 18:19:45 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:51.832 18:19:45 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:51.832 18:19:45 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:51.832 18:19:45 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:51.832 18:19:45 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:15:51.832 18:19:45 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:51.832 18:19:45 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:51.832 18:19:45 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:15:51.832 18:19:45 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:51.832 18:19:45 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:52.091 Initializing NVMe Controllers 00:15:52.091 Attaching to 0000:00:10.0 00:15:52.091 Attaching to 0000:00:11.0 00:15:52.091 Attached to 0000:00:11.0 00:15:52.091 Attached to 0000:00:10.0 00:15:52.091 Initialization complete. Starting I/O... 
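The surprise-removal phase that just started is driven by SPDK's stock hotplug example rather than by a target process: setup.sh is re-run with PCI_ALLOWED narrowed to the two test controllers, then the example is launched with the arguments shown above. Restated standalone (the -n/-r values mirror the wrapper's hotplug_wait=6; their exact semantics are not spelled out in this log):

# Same invocation as traced above, restated; run from the spdk repo root.
PCI_ALLOWED='0000:00:10.0 0000:00:11.0' ./scripts/setup.sh   # bind only the two test controllers
./build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning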
00:15:52.091 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:15:52.091 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:15:52.091 00:15:53.030 QEMU NVMe Ctrl (12341 ): 1908 I/Os completed (+1908) 00:15:53.030 QEMU NVMe Ctrl (12340 ): 1908 I/Os completed (+1908) 00:15:53.030 00:15:54.409 QEMU NVMe Ctrl (12341 ): 4379 I/Os completed (+2471) 00:15:54.409 QEMU NVMe Ctrl (12340 ): 4389 I/Os completed (+2481) 00:15:54.409 00:15:55.346 QEMU NVMe Ctrl (12341 ): 6827 I/Os completed (+2448) 00:15:55.346 QEMU NVMe Ctrl (12340 ): 6877 I/Os completed (+2488) 00:15:55.346 00:15:56.283 QEMU NVMe Ctrl (12341 ): 9219 I/Os completed (+2392) 00:15:56.283 QEMU NVMe Ctrl (12340 ): 9290 I/Os completed (+2413) 00:15:56.283 00:15:57.221 QEMU NVMe Ctrl (12341 ): 11715 I/Os completed (+2496) 00:15:57.221 QEMU NVMe Ctrl (12340 ): 11793 I/Os completed (+2503) 00:15:57.221 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:58.161 [2024-11-26 18:19:51.133367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:58.161 Controller removed: QEMU NVMe Ctrl (12340 ) 00:15:58.161 [2024-11-26 18:19:51.134849] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 [2024-11-26 18:19:51.134963] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 [2024-11-26 18:19:51.135018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 [2024-11-26 18:19:51.135055] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:15:58.161 [2024-11-26 18:19:51.137483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 [2024-11-26 18:19:51.137581] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 [2024-11-26 18:19:51.137646] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 [2024-11-26 18:19:51.137689] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:58.161 [2024-11-26 18:19:51.169885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:58.161 Controller removed: QEMU NVMe Ctrl (12341 ) 00:15:58.161 [2024-11-26 18:19:51.171324] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 [2024-11-26 18:19:51.171417] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 [2024-11-26 18:19:51.171477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 [2024-11-26 18:19:51.171521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:15:58.161 [2024-11-26 18:19:51.173799] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 [2024-11-26 18:19:51.173872] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 [2024-11-26 18:19:51.173914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 [2024-11-26 18:19:51.173945] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:58.161 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:15:58.161 EAL: Scan for (pci) bus failed. 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:58.161 Attaching to 0000:00:10.0 00:15:58.161 Attached to 0000:00:10.0 00:15:58.161 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:15:58.161 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:58.161 18:19:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:58.161 Attaching to 0000:00:11.0 00:15:58.161 Attached to 0000:00:11.0 00:15:59.098 QEMU NVMe Ctrl (12340 ): 2404 I/Os completed (+2404) 00:15:59.098 QEMU NVMe Ctrl (12341 ): 2189 I/Os completed (+2189) 00:15:59.098 00:16:00.036 QEMU NVMe Ctrl (12340 ): 4956 I/Os completed (+2552) 00:16:00.036 QEMU NVMe Ctrl (12341 ): 4750 I/Os completed (+2561) 00:16:00.036 00:16:01.413 QEMU NVMe Ctrl (12340 ): 7584 I/Os completed (+2628) 00:16:01.413 QEMU NVMe Ctrl (12341 ): 7378 I/Os completed (+2628) 00:16:01.413 00:16:02.355 QEMU NVMe Ctrl (12340 ): 10240 I/Os completed (+2656) 00:16:02.355 QEMU NVMe Ctrl (12341 ): 10044 I/Os completed (+2666) 00:16:02.355 00:16:03.297 QEMU NVMe Ctrl (12340 ): 12944 I/Os completed (+2704) 00:16:03.297 QEMU NVMe Ctrl (12341 ): 12748 I/Os completed (+2704) 00:16:03.298 00:16:04.235 QEMU NVMe Ctrl (12340 ): 15472 I/Os completed (+2528) 00:16:04.235 QEMU NVMe Ctrl (12341 ): 15330 I/Os completed (+2582) 00:16:04.235 00:16:05.171 QEMU NVMe Ctrl (12340 ): 17984 I/Os completed (+2512) 00:16:05.171 
QEMU NVMe Ctrl (12341 ): 17853 I/Os completed (+2523) 00:16:05.171 00:16:06.107 QEMU NVMe Ctrl (12340 ): 20620 I/Os completed (+2636) 00:16:06.107 QEMU NVMe Ctrl (12341 ): 20491 I/Os completed (+2638) 00:16:06.107 00:16:07.044 QEMU NVMe Ctrl (12340 ): 23184 I/Os completed (+2564) 00:16:07.044 QEMU NVMe Ctrl (12341 ): 23074 I/Os completed (+2583) 00:16:07.044 00:16:08.425 QEMU NVMe Ctrl (12340 ): 25752 I/Os completed (+2568) 00:16:08.425 QEMU NVMe Ctrl (12341 ): 25656 I/Os completed (+2582) 00:16:08.425 00:16:09.362 QEMU NVMe Ctrl (12340 ): 28326 I/Os completed (+2574) 00:16:09.362 QEMU NVMe Ctrl (12341 ): 28248 I/Os completed (+2592) 00:16:09.362 00:16:10.315 QEMU NVMe Ctrl (12340 ): 31017 I/Os completed (+2691) 00:16:10.315 QEMU NVMe Ctrl (12341 ): 30934 I/Os completed (+2686) 00:16:10.315 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:10.315 [2024-11-26 18:20:03.435268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:10.315 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:10.315 [2024-11-26 18:20:03.436496] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 [2024-11-26 18:20:03.436550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 [2024-11-26 18:20:03.436569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 [2024-11-26 18:20:03.436586] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:10.315 [2024-11-26 18:20:03.438892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 [2024-11-26 18:20:03.438935] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 [2024-11-26 18:20:03.438949] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 [2024-11-26 18:20:03.438966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:10.315 [2024-11-26 18:20:03.465845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:10.315 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:10.315 [2024-11-26 18:20:03.467031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 [2024-11-26 18:20:03.467078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 [2024-11-26 18:20:03.467100] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 [2024-11-26 18:20:03.467119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:10.315 [2024-11-26 18:20:03.469173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 [2024-11-26 18:20:03.469211] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 [2024-11-26 18:20:03.469228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 [2024-11-26 18:20:03.469243] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:10.315 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:10.315 Attaching to 0000:00:10.0 00:16:10.315 Attached to 0000:00:10.0 00:16:10.573 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:10.573 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:10.573 18:20:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:10.573 Attaching to 0000:00:11.0 00:16:10.573 Attached to 0000:00:11.0 00:16:11.139 QEMU NVMe Ctrl (12340 ): 1784 I/Os completed (+1784) 00:16:11.139 QEMU NVMe Ctrl (12341 ): 1528 I/Os completed (+1528) 00:16:11.139 00:16:12.074 QEMU NVMe Ctrl (12340 ): 4333 I/Os completed (+2549) 00:16:12.074 QEMU NVMe Ctrl (12341 ): 4091 I/Os completed (+2563) 00:16:12.074 00:16:13.009 QEMU NVMe Ctrl (12340 ): 6925 I/Os completed (+2592) 00:16:13.009 QEMU NVMe Ctrl (12341 ): 6683 I/Os completed (+2592) 00:16:13.009 00:16:14.388 QEMU NVMe Ctrl (12340 ): 9565 I/Os completed (+2640) 00:16:14.388 QEMU NVMe Ctrl (12341 ): 9326 I/Os completed (+2643) 00:16:14.388 00:16:15.324 QEMU NVMe Ctrl (12340 ): 12181 I/Os completed (+2616) 00:16:15.324 QEMU NVMe Ctrl (12341 ): 11947 I/Os completed (+2621) 00:16:15.324 00:16:16.261 QEMU NVMe Ctrl (12340 ): 14829 I/Os completed (+2648) 00:16:16.261 QEMU NVMe Ctrl (12341 ): 14595 I/Os completed (+2648) 00:16:16.261 00:16:17.205 QEMU NVMe Ctrl (12340 ): 17421 I/Os completed (+2592) 00:16:17.205 QEMU NVMe Ctrl (12341 ): 17191 I/Os completed (+2596) 00:16:17.205 00:16:18.165 QEMU NVMe Ctrl (12340 ): 20044 I/Os completed (+2623) 00:16:18.165 QEMU NVMe Ctrl (12341 ): 19820 I/Os completed (+2629) 00:16:18.165 00:16:19.102 
QEMU NVMe Ctrl (12340 ): 22704 I/Os completed (+2660) 00:16:19.102 QEMU NVMe Ctrl (12341 ): 22535 I/Os completed (+2715) 00:16:19.102 00:16:20.038 QEMU NVMe Ctrl (12340 ): 25311 I/Os completed (+2607) 00:16:20.038 QEMU NVMe Ctrl (12341 ): 25245 I/Os completed (+2710) 00:16:20.038 00:16:20.975 QEMU NVMe Ctrl (12340 ): 28006 I/Os completed (+2695) 00:16:20.975 QEMU NVMe Ctrl (12341 ): 27986 I/Os completed (+2741) 00:16:20.975 00:16:22.351 QEMU NVMe Ctrl (12340 ): 30572 I/Os completed (+2566) 00:16:22.351 QEMU NVMe Ctrl (12341 ): 30582 I/Os completed (+2596) 00:16:22.351 00:16:22.610 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:22.610 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:22.610 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:22.610 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:22.610 [2024-11-26 18:20:15.736548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:22.611 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:22.611 [2024-11-26 18:20:15.737879] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 [2024-11-26 18:20:15.737938] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 [2024-11-26 18:20:15.737961] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 [2024-11-26 18:20:15.737983] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:22.611 [2024-11-26 18:20:15.740220] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 [2024-11-26 18:20:15.740271] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 [2024-11-26 18:20:15.740287] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 [2024-11-26 18:20:15.740303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:22.611 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:22.611 [2024-11-26 18:20:15.770436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:22.611 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:22.611 [2024-11-26 18:20:15.771684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 [2024-11-26 18:20:15.771735] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 [2024-11-26 18:20:15.771755] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 [2024-11-26 18:20:15.771772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:22.611 [2024-11-26 18:20:15.773843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 [2024-11-26 18:20:15.773885] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 [2024-11-26 18:20:15.773903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 [2024-11-26 18:20:15.773917] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.611 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:22.611 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:22.611 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:22.611 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:22.611 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:22.611 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:22.611 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:22.611 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:22.611 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:22.611 18:20:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:22.611 Attaching to 0000:00:10.0 00:16:22.611 Attached to 0000:00:10.0 00:16:22.870 18:20:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:22.870 18:20:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:22.870 18:20:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:22.870 Attaching to 0000:00:11.0 00:16:22.870 Attached to 0000:00:11.0 00:16:22.870 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:22.870 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:22.870 [2024-11-26 18:20:16.034081] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:16:35.082 18:20:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:35.082 18:20:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:35.082 18:20:28 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.90 00:16:35.082 18:20:28 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.90 00:16:35.082 18:20:28 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:35.082 18:20:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.90 00:16:35.082 18:20:28 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.90 2 00:16:35.082 remove_attach_helper took 42.90s to complete (handling 2 nvme drive(s)) 18:20:28 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:16:41.650 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68659 00:16:41.650 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68659) - No such process 00:16:41.650 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68659 00:16:41.650 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:16:41.650 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:16:41.650 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:16:41.650 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69197 00:16:41.650 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.650 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:16:41.650 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69197 00:16:41.650 18:20:34 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69197 ']' 00:16:41.650 18:20:34 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.650 18:20:34 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.650 18:20:34 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.650 18:20:34 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.650 18:20:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:41.650 [2024-11-26 18:20:34.146299] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:16:41.650 [2024-11-26 18:20:34.146493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69197 ] 00:16:41.650 [2024-11-26 18:20:34.319850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.650 [2024-11-26 18:20:34.454932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.218 18:20:35 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.218 18:20:35 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:16:42.218 18:20:35 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:16:42.218 18:20:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.218 18:20:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:42.218 18:20:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.218 18:20:35 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:16:42.218 18:20:35 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:42.218 18:20:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:16:42.218 18:20:35 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:16:42.218 18:20:35 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:16:42.218 18:20:35 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:16:42.218 18:20:35 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:16:42.218 18:20:35 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:16:42.218 18:20:35 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:42.218 18:20:35 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:42.218 18:20:35 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:16:42.218 18:20:35 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:42.218 18:20:35 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:48.787 18:20:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:48.787 18:20:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:48.787 18:20:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:48.787 18:20:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:48.787 18:20:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:48.787 18:20:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:48.787 18:20:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:48.787 18:20:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:48.787 18:20:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:48.787 18:20:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:48.787 18:20:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:48.787 18:20:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.787 18:20:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:48.787 [2024-11-26 18:20:41.548516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:48.787 [2024-11-26 18:20:41.550638] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.787 [2024-11-26 18:20:41.550725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.787 [2024-11-26 18:20:41.550776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.788 [2024-11-26 18:20:41.550829] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.788 [2024-11-26 18:20:41.550862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.788 [2024-11-26 18:20:41.550933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.788 [2024-11-26 18:20:41.550980] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.788 [2024-11-26 18:20:41.551057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.788 [2024-11-26 18:20:41.551102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.788 [2024-11-26 18:20:41.551148] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.788 [2024-11-26 18:20:41.551188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.788 [2024-11-26 18:20:41.551231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.788 18:20:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.788 18:20:41 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:48.788 18:20:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:48.788 [2024-11-26 18:20:42.047567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:16:48.788 [2024-11-26 18:20:42.049706] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.788 [2024-11-26 18:20:42.049792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.788 [2024-11-26 18:20:42.049841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.788 [2024-11-26 18:20:42.049884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.788 [2024-11-26 18:20:42.049909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.788 [2024-11-26 18:20:42.049941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.788 [2024-11-26 18:20:42.049973] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.788 [2024-11-26 18:20:42.049996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.788 [2024-11-26 18:20:42.050154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.788 [2024-11-26 18:20:42.050204] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:48.788 [2024-11-26 18:20:42.050252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:48.788 [2024-11-26 18:20:42.050295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:48.788 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:48.788 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:48.788 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:48.788 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:48.788 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:48.788 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:48.788 18:20:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.788 18:20:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:48.788 18:20:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.048 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:49.048 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:49.048 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:49.048 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:49.048 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:49.048 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:49.048 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:49.048 
18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:49.048 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:49.048 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:49.307 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:49.307 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:49.307 18:20:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:01.547 18:20:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.547 18:20:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:01.547 18:20:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:01.547 [2024-11-26 18:20:54.523581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
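The sw_hotplug.sh@39-@40 records above capture the hot-remove half of the helper: for each device under test it echoes 1, after which the driver reports the controller in a failed state and aborts its outstanding admin commands. xtrace does not record redirection targets, so the sysfs path below is an assumption based on the standard PCI hot-remove recipe; a minimal sketch of what that loop likely does:

  # hot-remove each NVMe device; the /sys/.../remove target is assumed,
  # since the trace shows only the echoed value
  for dev in "${nvmes[@]}"; do
      echo 1 > "/sys/bus/pci/devices/$dev/remove"
  done

Writing 1 to a device's remove node detaches it from the bus, which is what triggers the nvme_ctrlr_fail and "aborting outstanding command" records that follow each removal in this log.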
00:17:01.547 [2024-11-26 18:20:54.525516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:01.547 [2024-11-26 18:20:54.525557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.547 [2024-11-26 18:20:54.525571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.547 [2024-11-26 18:20:54.525593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:01.547 [2024-11-26 18:20:54.525603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.547 [2024-11-26 18:20:54.525625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.547 [2024-11-26 18:20:54.525636] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:01.547 [2024-11-26 18:20:54.525646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.547 [2024-11-26 18:20:54.525656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.547 [2024-11-26 18:20:54.525667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:01.547 [2024-11-26 18:20:54.525675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.547 [2024-11-26 18:20:54.525685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:01.547 18:20:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.547 18:20:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:01.547 18:20:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:01.547 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:01.806 [2024-11-26 18:20:54.922800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
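The failed-state and abort records for 0000:00:10.0 and 0000:00:11.0 repeat once per hotplug event, so a long run produces many near-identical blocks. When skimming a saved copy of this log, a one-liner like the following can summarize them; the build.log filename is a placeholder:

  # count failed-state transitions per controller in a saved copy of this log
  grep -o '\[0000:00:1[01]\.0, 0\] in failed state' build.log | sort | uniq -c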
00:17:01.806 [2024-11-26 18:20:54.924544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:01.806 [2024-11-26 18:20:54.924583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.806 [2024-11-26 18:20:54.924601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.806 [2024-11-26 18:20:54.924633] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:01.806 [2024-11-26 18:20:54.924645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.806 [2024-11-26 18:20:54.924654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.806 [2024-11-26 18:20:54.924666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:01.806 [2024-11-26 18:20:54.924675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.806 [2024-11-26 18:20:54.924684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.806 [2024-11-26 18:20:54.924693] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:01.806 [2024-11-26 18:20:54.924704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.806 [2024-11-26 18:20:54.924713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.806 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:01.806 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:01.806 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:01.806 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:01.806 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:01.806 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:01.806 18:20:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.806 18:20:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:01.806 18:20:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.806 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:01.806 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:02.065 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:02.065 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:02.065 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:02.065 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:02.065 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:02.065 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:02.065 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:02.065 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
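Between events the helper re-attaches the devices, visible as the @56-@62 echoes around this point: a single echo of 1, then for each device an echo of uio_pci_generic, the BDF twice, and an empty string. Because xtrace strips redirections, the sysfs targets below are guesses based on the usual driver_override rebind sequence; treat this as one plausible reconstruction rather than the script's literal text:

  echo 1 > /sys/bus/pci/rescan                  # @56: rediscover the removed devices (assumed target)
  for dev in "${nvmes[@]}"; do
      # @59: steer the device to uio_pci_generic on its next probe (assumed target)
      echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
      # @60-@61: detach from the current driver, then ask the PCI core to re-probe (assumed targets)
      echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
      echo "$dev" > /sys/bus/pci/drivers_probe
      echo '' > "/sys/bus/pci/devices/$dev/driver_override"   # @62: clear the override
  done

The "Attaching to 0000:00:10.0" / "Attached to 0000:00:10.0" pairs earlier in the log appear to be the application side of this handshake, printed as its hotplug poller picks the re-probed devices back up.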
00:17:02.323 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:02.323 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:02.323 18:20:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:14.546 18:21:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.546 18:21:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:14.546 18:21:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:14.546 [2024-11-26 18:21:07.498705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:14.546 [2024-11-26 18:21:07.501028] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:14.546 [2024-11-26 18:21:07.501075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.546 [2024-11-26 18:21:07.501090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.546 [2024-11-26 18:21:07.501113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:14.546 [2024-11-26 18:21:07.501123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.546 [2024-11-26 18:21:07.501136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.546 [2024-11-26 18:21:07.501147] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:14.546 [2024-11-26 18:21:07.501158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.546 [2024-11-26 18:21:07.501167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.546 [2024-11-26 18:21:07.501178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:14.546 [2024-11-26 18:21:07.501186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.546 [2024-11-26 18:21:07.501196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:14.546 18:21:07 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:14.546 18:21:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.546 18:21:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:14.546 18:21:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:14.546 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:14.805 [2024-11-26 18:21:07.897939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:17:14.805 [2024-11-26 18:21:07.899830] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:14.805 [2024-11-26 18:21:07.899869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.805 [2024-11-26 18:21:07.899901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.805 [2024-11-26 18:21:07.899920] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:14.805 [2024-11-26 18:21:07.899931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.805 [2024-11-26 18:21:07.899940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.805 [2024-11-26 18:21:07.899953] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:14.805 [2024-11-26 18:21:07.899962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.805 [2024-11-26 18:21:07.899976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.805 [2024-11-26 18:21:07.899987] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:14.805 [2024-11-26 18:21:07.899998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.805 [2024-11-26 18:21:07.900008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.805 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:14.805 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:14.805 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:14.805 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:14.805 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:14.805 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
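The @12/@13 records that recur throughout this section show how bdev_bdfs computes the set of PCI addresses still backing NVMe bdevs: it feeds the output of rpc_cmd bdev_get_bdevs through jq (the /dev/fd/63 in the trace is bash process substitution) and de-duplicates with sort -u, and the @50/@51 records poll it until the list drains. The helper body below is taken almost verbatim from those records; the exact shape of the surrounding loop is inferred:

  bdev_bdfs() {
      # list every PCI address still referenced by an NVMe bdev, one per line
      jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
  }

  # poll until no bdev references the removed devices any more
  while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
  done

This is why the log alternates "(( 2 > 0 ))", a half-second sleep, and "Still waiting for ... to be gone" until the count reaches zero.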
00:17:14.805 18:21:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.805 18:21:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:14.805 18:21:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.805 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:14.805 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:15.063 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:15.063 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:15.063 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:15.063 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:15.063 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:15.063 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:15.063 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:15.063 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:15.063 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:15.322 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:15.322 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.98 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.98 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.98 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.98 2 00:17:27.521 remove_attach_helper took 44.98s to complete (handling 2 nvme drive(s)) 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:17:27.521 18:21:20 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:17:27.521 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:34.081 18:21:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:34.081 18:21:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:34.081 18:21:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:34.081 18:21:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:34.081 18:21:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:34.081 18:21:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:34.081 18:21:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:34.081 18:21:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:34.081 18:21:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:34.081 18:21:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:34.081 18:21:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.081 18:21:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:34.081 18:21:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:34.081 [2024-11-26 18:21:26.564659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
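The 44.98 s figure reported just above comes from timing_cmd (autotest_common.sh@709-@722 in the trace): it runs the helper under bash's time builtin with TIMEFORMAT=%2R, so the only thing captured is the elapsed real time with two decimals. A hedged sketch of that pattern; the names match the trace, but the descriptor plumbing is inferred and the real helper routes stderr more carefully than this does:

  timing_cmd() (
      local cmd_es=0
      local time=0 TIMEFORMAT=%2R
      exec 3>&1                              # keep the command's own stdout flowing through
      # time reports on the block's stderr; capture it (the command's stderr
      # is folded into the capture in this simplified sketch)
      time=$({ time "$@" >&3; } 2>&1) || cmd_es=$?
      echo "$time"                           # emit just the %2R figure, e.g. 44.98
      return "$cmd_es"
  )

  # usage matching the @21/@22 records in this log
  helper_time=$(timing_cmd remove_attach_helper 3 6 true)
  printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' "$helper_time" 2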
00:17:34.081 [2024-11-26 18:21:26.567044] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:34.081 [2024-11-26 18:21:26.567104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.081 [2024-11-26 18:21:26.567123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.081 [2024-11-26 18:21:26.567151] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:34.081 [2024-11-26 18:21:26.567163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.081 [2024-11-26 18:21:26.567177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.081 [2024-11-26 18:21:26.567188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:34.081 [2024-11-26 18:21:26.567201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.081 [2024-11-26 18:21:26.567212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.081 [2024-11-26 18:21:26.567229] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:34.081 [2024-11-26 18:21:26.567240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.081 [2024-11-26 18:21:26.567256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.081 18:21:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.081 18:21:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:34.081 18:21:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:34.081 [2024-11-26 18:21:26.963879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:34.081 [2024-11-26 18:21:26.968027] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:34.081 [2024-11-26 18:21:26.968082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.081 [2024-11-26 18:21:26.968099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.081 [2024-11-26 18:21:26.968120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:34.081 [2024-11-26 18:21:26.968132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.081 [2024-11-26 18:21:26.968141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.081 [2024-11-26 18:21:26.968172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:34.081 [2024-11-26 18:21:26.968181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.081 [2024-11-26 18:21:26.968193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.081 [2024-11-26 18:21:26.968204] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:34.081 [2024-11-26 18:21:26.968217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:34.081 [2024-11-26 18:21:26.968227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:34.081 18:21:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.081 18:21:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:34.081 18:21:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:34.081 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:34.339 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:34.339 18:21:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:46.553 18:21:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.553 18:21:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:46.553 18:21:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:46.553 18:21:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.553 18:21:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:46.553 [2024-11-26 18:21:39.539784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:17:46.553 [2024-11-26 18:21:39.542504] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.553 [2024-11-26 18:21:39.542560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.553 [2024-11-26 18:21:39.542579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.553 [2024-11-26 18:21:39.542614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.553 [2024-11-26 18:21:39.542648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.553 [2024-11-26 18:21:39.542661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.553 [2024-11-26 18:21:39.542674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.553 [2024-11-26 18:21:39.542687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.553 [2024-11-26 18:21:39.542697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.553 [2024-11-26 18:21:39.542726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.553 [2024-11-26 18:21:39.542737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.553 [2024-11-26 18:21:39.542761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.553 18:21:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.553 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:46.554 18:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:46.812 [2024-11-26 18:21:39.939052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
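The long backslash runs in the @71 records of this section ("[[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:..." and so on) are not corruption: the right-hand side of == inside [[ ]] is a glob pattern, and xtrace escapes every character to show that the comparison will match literally. The check itself verifies that, after rescan and rebind, the bdev-backed BDF list matches the devices the test started with. Reconstructed from the @70/@71 tags, with the nvmes handling inferred:

  # after rescan and rebind, confirm both devices are back as bdevs
  bdfs=($(bdev_bdfs))                  # @70
  [[ ${bdfs[*]} == "${nvmes[*]}" ]]    # @71: exact, literal comparison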
00:17:46.812 [2024-11-26 18:21:39.940956] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.812 [2024-11-26 18:21:39.941001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.812 [2024-11-26 18:21:39.941021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.812 [2024-11-26 18:21:39.941048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.812 [2024-11-26 18:21:39.941066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.812 [2024-11-26 18:21:39.941075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.812 [2024-11-26 18:21:39.941087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.812 [2024-11-26 18:21:39.941096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.812 [2024-11-26 18:21:39.941108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.812 [2024-11-26 18:21:39.941117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:46.812 [2024-11-26 18:21:39.941130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.812 [2024-11-26 18:21:39.941139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.812 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:46.812 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:46.813 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:46.813 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:46.813 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:46.813 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:46.813 18:21:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.813 18:21:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:46.813 18:21:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.813 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:46.813 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:47.072 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:47.072 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:47.072 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:47.072 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:47.072 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:47.072 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:47.072 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:47.072 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:47.072 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:47.072 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:47.072 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:59.296 18:21:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.296 18:21:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:59.296 18:21:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:59.296 18:21:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.296 18:21:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:59.296 18:21:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.296 [2024-11-26 18:21:52.514945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
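Pulling the per-step sketches above together, the sw_hotplug.sh@NN tags in this log imply the following overall shape for remove_attach_helper. This is an assembled reconstruction, not the script's source; the sysfs targets remain assumptions, as noted earlier:

  remove_attach_helper() {
      local hotplug_events=3 hotplug_wait=6 use_bdev=true   # @27-@29 ("true" in this phase)
      local dev bdfs                                        # @30
      sleep "$hotplug_wait"                                 # @36: settle before the first event
      while ((hotplug_events--)); do                        # @38: three remove/attach cycles
          for dev in "${nvmes[@]}"; do                      # @39
              echo 1 > "/sys/bus/pci/devices/$dev/remove"   # @40: hot-remove (assumed target)
          done
          if "$use_bdev"; then                              # @43
              while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do       # @50
                  printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"  # @51
                  sleep 0.5
              done
          fi
          echo 1 > /sys/bus/pci/rescan                      # @56 (assumed target)
          # @58-@62: per-device driver_override rebind, as sketched above
          sleep $((hotplug_wait * 2))                       # @66: the 12 s settle seen in the log
          bdfs=($(bdev_bdfs))                               # @70
          [[ ${bdfs[*]} == "${nvmes[*]}" ]]                 # @71: everything came back
      done
  }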
00:17:59.296 [2024-11-26 18:21:52.516836] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:59.296 [2024-11-26 18:21:52.516888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.296 [2024-11-26 18:21:52.516909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.296 [2024-11-26 18:21:52.516935] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:59.296 [2024-11-26 18:21:52.516946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.296 [2024-11-26 18:21:52.516957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.296 [2024-11-26 18:21:52.516966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:59.296 [2024-11-26 18:21:52.516979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.296 [2024-11-26 18:21:52.516988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.296 [2024-11-26 18:21:52.517000] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:59.296 [2024-11-26 18:21:52.517009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.296 [2024-11-26 18:21:52.517020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:59.296 18:21:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:59.865 [2024-11-26 18:21:53.014003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:59.865 [2024-11-26 18:21:53.015421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:59.865 [2024-11-26 18:21:53.015458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.865 [2024-11-26 18:21:53.015474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.865 [2024-11-26 18:21:53.015495] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:59.865 [2024-11-26 18:21:53.015507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.865 [2024-11-26 18:21:53.015516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.865 [2024-11-26 18:21:53.015529] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:59.865 [2024-11-26 18:21:53.015537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.865 [2024-11-26 18:21:53.015548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.865 [2024-11-26 18:21:53.015557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:59.865 [2024-11-26 18:21:53.015573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.865 [2024-11-26 18:21:53.015582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.865 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:59.865 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:59.865 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:59.865 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:59.865 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:59.865 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:59.865 18:21:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.865 18:21:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:59.865 18:21:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.865 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:59.865 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:59.865 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:59.865 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:59.865 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:00.124 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:00.124 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:00.124 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:00.124 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:00.124 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:18:00.124 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:00.124 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:00.124 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:12.341 18:22:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:12.341 18:22:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:12.341 18:22:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:12.341 18:22:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:12.341 18:22:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:12.341 18:22:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.341 18:22:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:12.341 18:22:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.91 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.91 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:18:12.341 18:22:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.91 00:18:12.341 18:22:05 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.91 2 00:18:12.341 remove_attach_helper took 44.91s to complete (handling 2 nvme drive(s)) 18:22:05 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:18:12.341 18:22:05 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69197 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69197 ']' 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69197 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69197 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.341 killing process with pid 69197 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69197' 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69197 00:18:12.341 18:22:05 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69197 00:18:14.882 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:14.882 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:15.450 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:15.451 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:15.709 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:15.709 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:15.709 00:18:15.709 real 2m32.656s 00:18:15.709 user 1m52.699s 00:18:15.709 sys 0m19.646s 00:18:15.709 18:22:08 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.709 18:22:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:15.709 ************************************ 00:18:15.709 END TEST sw_hotplug 00:18:15.709 ************************************ 00:18:15.709 18:22:08 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:18:15.709 18:22:08 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:15.709 18:22:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:15.709 18:22:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.709 18:22:08 -- common/autotest_common.sh@10 -- # set +x 00:18:15.709 ************************************ 00:18:15.709 START TEST nvme_xnvme 00:18:15.709 ************************************ 00:18:15.709 18:22:08 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:15.969 * Looking for test storage... 00:18:15.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:15.969 18:22:09 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:15.969 18:22:09 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:18:15.969 18:22:09 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:15.969 18:22:09 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:15.969 18:22:09 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:15.969 18:22:09 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:15.969 18:22:09 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:15.969 18:22:09 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:15.969 18:22:09 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:15.969 18:22:09 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:15.969 18:22:09 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:15.969 18:22:09 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:15.969 18:22:09 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:15.969 18:22:09 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:15.969 18:22:09 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:15.969 18:22:09 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:15.969 18:22:09 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:15.969 18:22:09 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:15.970 18:22:09 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.970 18:22:09 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:15.970 18:22:09 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:15.970 18:22:09 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:15.970 18:22:09 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:15.970 18:22:09 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:15.970 18:22:09 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:15.970 18:22:09 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:15.970 18:22:09 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:15.970 18:22:09 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:15.970 18:22:09 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:15.970 18:22:09 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:15.970 18:22:09 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:15.970 18:22:09 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:15.970 18:22:09 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:15.970 18:22:09 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:15.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.970 --rc genhtml_branch_coverage=1 00:18:15.970 --rc genhtml_function_coverage=1 00:18:15.970 --rc genhtml_legend=1 00:18:15.970 --rc geninfo_all_blocks=1 00:18:15.970 --rc geninfo_unexecuted_blocks=1 00:18:15.970 00:18:15.970 ' 00:18:15.970 18:22:09 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:15.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.970 --rc genhtml_branch_coverage=1 00:18:15.970 --rc genhtml_function_coverage=1 00:18:15.970 --rc genhtml_legend=1 00:18:15.970 --rc geninfo_all_blocks=1 00:18:15.970 --rc geninfo_unexecuted_blocks=1 00:18:15.970 00:18:15.970 ' 00:18:15.970 18:22:09 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:15.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.970 --rc genhtml_branch_coverage=1 00:18:15.970 --rc genhtml_function_coverage=1 00:18:15.970 --rc genhtml_legend=1 00:18:15.970 --rc geninfo_all_blocks=1 00:18:15.970 --rc geninfo_unexecuted_blocks=1 00:18:15.970 00:18:15.970 ' 00:18:15.970 18:22:09 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:15.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.970 --rc genhtml_branch_coverage=1 00:18:15.970 --rc genhtml_function_coverage=1 00:18:15.970 --rc genhtml_legend=1 00:18:15.970 --rc geninfo_all_blocks=1 00:18:15.970 --rc geninfo_unexecuted_blocks=1 00:18:15.970 00:18:15.970 ' 00:18:15.970 18:22:09 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:18:15.970 18:22:09 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:18:15.970 18:22:09 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:18:15.970 18:22:09 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:18:15.970 18:22:09 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:18:15.970 18:22:09 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:18:15.970 18:22:09 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:18:15.970 18:22:09 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:18:15.970 18:22:09 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:18:15.970 18:22:09 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:18:15.970 18:22:09 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:18:15.970 18:22:09 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:18:15.971 18:22:09 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:18:15.971 18:22:09 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:18:15.971 18:22:09 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:18:15.971 18:22:09 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:18:15.971 18:22:09 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:18:15.971 #define SPDK_CONFIG_H 00:18:15.971 #define SPDK_CONFIG_AIO_FSDEV 1 00:18:15.971 #define SPDK_CONFIG_APPS 1 00:18:15.971 #define SPDK_CONFIG_ARCH native 00:18:15.971 #define SPDK_CONFIG_ASAN 1 00:18:15.971 #undef SPDK_CONFIG_AVAHI 00:18:15.971 #undef SPDK_CONFIG_CET 00:18:15.971 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:18:15.971 #define SPDK_CONFIG_COVERAGE 1 00:18:15.971 #define SPDK_CONFIG_CROSS_PREFIX 00:18:15.971 #undef SPDK_CONFIG_CRYPTO 00:18:15.971 #undef SPDK_CONFIG_CRYPTO_MLX5 00:18:15.971 #undef SPDK_CONFIG_CUSTOMOCF 00:18:15.971 #undef SPDK_CONFIG_DAOS 00:18:15.971 #define SPDK_CONFIG_DAOS_DIR 00:18:15.971 #define SPDK_CONFIG_DEBUG 1 00:18:15.971 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:18:15.971 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:18:15.971 #define SPDK_CONFIG_DPDK_INC_DIR 00:18:15.971 #define SPDK_CONFIG_DPDK_LIB_DIR 00:18:15.971 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:18:15.971 #undef SPDK_CONFIG_DPDK_UADK 00:18:15.971 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:15.971 #define SPDK_CONFIG_EXAMPLES 1 00:18:15.971 #undef SPDK_CONFIG_FC 00:18:15.971 #define SPDK_CONFIG_FC_PATH 00:18:15.971 #define SPDK_CONFIG_FIO_PLUGIN 1 00:18:15.971 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:18:15.971 #define SPDK_CONFIG_FSDEV 1 00:18:15.971 #undef SPDK_CONFIG_FUSE 00:18:15.971 #undef SPDK_CONFIG_FUZZER 00:18:15.971 #define SPDK_CONFIG_FUZZER_LIB 00:18:15.971 #undef SPDK_CONFIG_GOLANG 00:18:15.971 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:18:15.971 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:18:15.971 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:18:15.971 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:18:15.971 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:18:15.971 #undef SPDK_CONFIG_HAVE_LIBBSD 00:18:15.971 #undef SPDK_CONFIG_HAVE_LZ4 00:18:15.971 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:18:15.971 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:18:15.971 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:18:15.971 #define SPDK_CONFIG_IDXD 1 00:18:15.971 #define SPDK_CONFIG_IDXD_KERNEL 1 00:18:15.971 #undef SPDK_CONFIG_IPSEC_MB 00:18:15.971 #define SPDK_CONFIG_IPSEC_MB_DIR 00:18:15.971 #define SPDK_CONFIG_ISAL 1 00:18:15.971 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:18:15.971 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:18:15.971 #define SPDK_CONFIG_LIBDIR 00:18:15.971 #undef SPDK_CONFIG_LTO 00:18:15.971 #define SPDK_CONFIG_MAX_LCORES 128 00:18:15.971 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:18:15.971 #define SPDK_CONFIG_NVME_CUSE 1 00:18:15.971 #undef SPDK_CONFIG_OCF 00:18:15.971 #define SPDK_CONFIG_OCF_PATH 00:18:15.971 #define SPDK_CONFIG_OPENSSL_PATH 00:18:15.971 #undef SPDK_CONFIG_PGO_CAPTURE 00:18:15.971 #define SPDK_CONFIG_PGO_DIR 00:18:15.971 #undef SPDK_CONFIG_PGO_USE 00:18:15.971 #define SPDK_CONFIG_PREFIX /usr/local 00:18:15.971 #undef SPDK_CONFIG_RAID5F 00:18:15.971 #undef SPDK_CONFIG_RBD 00:18:15.971 #define SPDK_CONFIG_RDMA 1 00:18:15.971 #define SPDK_CONFIG_RDMA_PROV verbs 00:18:15.971 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:18:15.971 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:18:15.971 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:18:15.971 #define SPDK_CONFIG_SHARED 1 00:18:15.971 #undef SPDK_CONFIG_SMA 00:18:15.971 #define SPDK_CONFIG_TESTS 1 00:18:15.971 #undef SPDK_CONFIG_TSAN 00:18:15.971 #define SPDK_CONFIG_UBLK 1 00:18:15.971 #define SPDK_CONFIG_UBSAN 1 00:18:15.971 #undef SPDK_CONFIG_UNIT_TESTS 00:18:15.971 #undef SPDK_CONFIG_URING 00:18:15.971 #define SPDK_CONFIG_URING_PATH 00:18:15.971 #undef SPDK_CONFIG_URING_ZNS 00:18:15.971 #undef SPDK_CONFIG_USDT 00:18:15.971 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:18:15.971 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:18:15.971 #undef SPDK_CONFIG_VFIO_USER 00:18:15.971 #define SPDK_CONFIG_VFIO_USER_DIR 00:18:15.971 #define SPDK_CONFIG_VHOST 1 00:18:15.971 #define SPDK_CONFIG_VIRTIO 1 00:18:15.971 #undef SPDK_CONFIG_VTUNE 00:18:15.971 #define SPDK_CONFIG_VTUNE_DIR 00:18:15.971 #define SPDK_CONFIG_WERROR 1 00:18:15.971 #define SPDK_CONFIG_WPDK_DIR 00:18:15.971 #define SPDK_CONFIG_XNVME 1 00:18:15.971 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:18:15.971 18:22:09 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:18:15.971 18:22:09 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:15.971 18:22:09 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:15.971 18:22:09 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.971 18:22:09 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.971 18:22:09 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.971 18:22:09 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.971 18:22:09 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.971 18:22:09 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.971 18:22:09 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:15.971 18:22:09 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.971 18:22:09 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@68 -- # uname -s 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:18:15.971 
18:22:09 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:18:15.971 18:22:09 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:18:15.971 18:22:09 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:18:15.971 18:22:09 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:18:15.971 18:22:09 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:18:15.972 18:22:09 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:18:16.233 18:22:09 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:18:16.234 18:22:09 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:16.234 18:22:09 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
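
Buried in the environment block above is the sanitizer policy for the whole run: ASan aborts on the first error, UBSan halts with exit code 134, and a one-line LeakSanitizer suppression file waves through a known libfuse3 leak. A minimal sketch of that setup (option strings copied from the trace; the instrumented binary at the end is a placeholder):

    #!/usr/bin/env bash
    # Recreate the sanitizer environment from the trace above.
    suppression_file=/var/tmp/asan_suppression_file

    # Known benign leak in libfuse3: suppress it instead of failing the run.
    rm -rf "$suppression_file"
    echo 'leak:libfuse3.so' > "$suppression_file"

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export LSAN_OPTIONS=suppressions=$suppression_file

    # ./instrumented_test_binary   # placeholder: inherits the policy above
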
00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70546 ]] 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70546 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.LeNice 00:18:16.234 18:22:09 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.LeNice/tests/xnvme /tmp/spdk.LeNice 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:18:16.235 18:22:09 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976985600 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5590802432 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261657600 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976985600 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5590802432 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:16.235 18:22:09 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97341935616 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2360844288 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:18:16.235 * Looking for test storage... 
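
The '* Looking for test storage...' banner above is set_test_storage at work: it has just snapshotted every mount with df -T and, in the trace that continues below, walks the candidate directories (the testdir first, then a mktemp fallback such as /tmp/spdk.LeNice) until one offers the ~2.06 GiB requested; in this run /home's btrfs volume with ~13 GiB free wins. A condensed sketch of that selection, assuming df -B1 so the arithmetic is in bytes and with the candidate list shortened:

    #!/usr/bin/env bash
    # Index every mounted filesystem by mount point, then pick the first
    # candidate directory whose backing mount has enough free space.
    declare -A mounts fss sizes avails uses
    requested_size=2214592512   # 2 GiB of test data + 64 MiB of slack

    while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source
      fss["$mount"]=$fs
      sizes["$mount"]=$size
      uses["$mount"]=$use
      avails["$mount"]=$avail
    done < <(df -T -B1 | grep -v Filesystem)

    for target_dir in /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp; do
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
      target_space=${avails["$mount"]:-0}
      if (( target_space >= requested_size )); then
        printf '* Found test storage at %s\n' "$target_dir"
        export SPDK_TEST_STORAGE=$target_dir
        break
      fi
    done
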
00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13976985600 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:16.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:16.235 18:22:09 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.235 18:22:09 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.236 18:22:09 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:16.236 18:22:09 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.236 18:22:09 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:16.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.236 --rc genhtml_branch_coverage=1 00:18:16.236 --rc genhtml_function_coverage=1 00:18:16.236 --rc genhtml_legend=1 00:18:16.236 --rc geninfo_all_blocks=1 00:18:16.236 --rc geninfo_unexecuted_blocks=1 00:18:16.236 00:18:16.236 ' 00:18:16.236 18:22:09 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:16.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.236 --rc genhtml_branch_coverage=1 00:18:16.236 --rc genhtml_function_coverage=1 00:18:16.236 --rc genhtml_legend=1 00:18:16.236 --rc geninfo_all_blocks=1 
00:18:16.236 --rc geninfo_unexecuted_blocks=1 00:18:16.236 00:18:16.236 ' 00:18:16.236 18:22:09 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:16.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.236 --rc genhtml_branch_coverage=1 00:18:16.236 --rc genhtml_function_coverage=1 00:18:16.236 --rc genhtml_legend=1 00:18:16.236 --rc geninfo_all_blocks=1 00:18:16.236 --rc geninfo_unexecuted_blocks=1 00:18:16.236 00:18:16.236 ' 00:18:16.236 18:22:09 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:16.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.236 --rc genhtml_branch_coverage=1 00:18:16.236 --rc genhtml_function_coverage=1 00:18:16.236 --rc genhtml_legend=1 00:18:16.236 --rc geninfo_all_blocks=1 00:18:16.236 --rc geninfo_unexecuted_blocks=1 00:18:16.236 00:18:16.236 ' 00:18:16.236 18:22:09 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.236 18:22:09 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:16.236 18:22:09 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.236 18:22:09 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.236 18:22:09 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.236 18:22:09 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.236 18:22:09 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.236 18:22:09 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.236 18:22:09 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:16.236 18:22:09 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.236 18:22:09 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:18:16.236 18:22:09 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:16.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:17.061 Waiting for block devices as requested 00:18:17.061 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:17.320 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:17.320 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:17.320 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:22.593 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:22.593 18:22:15 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:18:22.852 18:22:16 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:18:22.852 18:22:16 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:18:23.111 18:22:16 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:18:23.111 18:22:16 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:18:23.111 No valid GPT data, bailing 00:18:23.111 18:22:16 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:23.111 18:22:16 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:18:23.111 18:22:16 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:18:23.111 18:22:16 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:23.111 18:22:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:23.111 18:22:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.111 18:22:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:23.111 ************************************ 00:18:23.111 START TEST xnvme_rpc 00:18:23.111 ************************************ 00:18:23.111 18:22:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:23.111 18:22:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:23.111 18:22:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:23.111 18:22:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:23.111 18:22:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:23.111 18:22:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70937 00:18:23.111 18:22:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:23.111 18:22:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70937 00:18:23.111 18:22:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70937 ']' 00:18:23.111 18:22:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.112 18:22:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.112 18:22:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.112 18:22:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.112 18:22:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.371 [2024-11-26 18:22:16.449819] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
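For reference, the xnvme_rpc steps traced above can be reproduced by hand against a running spdk_tgt. A minimal sketch using the same paths and device names as this run (the positional arguments mirror the rpc_cmd calls in the trace; adjust /dev/nvme0n1 to the namespace present on your system):

    # Create an xnvme bdev over libaio; omitting -c leaves conserve_cpu=false.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio

    # Inspect the registered bdev the same way rpc_xnvme does in the trace.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'

    # Tear the bdev down again.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_delete xnvme_bdev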
00:18:23.371 [2024-11-26 18:22:16.450028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70937 ] 00:18:23.371 [2024-11-26 18:22:16.627584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.629 [2024-11-26 18:22:16.744292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.568 xnvme_bdev 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70937 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70937 ']' 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70937 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70937 00:18:24.568 killing process with pid 70937 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70937' 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70937 00:18:24.568 18:22:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70937 00:18:27.105 00:18:27.105 real 0m3.911s 00:18:27.105 user 0m4.072s 00:18:27.105 sys 0m0.478s 00:18:27.105 18:22:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.105 18:22:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:27.105 ************************************ 00:18:27.105 END TEST xnvme_rpc 00:18:27.105 ************************************ 00:18:27.105 18:22:20 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:27.105 18:22:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:27.105 18:22:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.105 18:22:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:27.106 ************************************ 00:18:27.106 START TEST xnvme_bdevperf 00:18:27.106 ************************************ 00:18:27.106 18:22:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:27.106 18:22:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:27.106 18:22:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:18:27.106 18:22:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:27.106 18:22:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:27.106 18:22:20 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:27.106 18:22:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:27.106 18:22:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:27.106 { 00:18:27.106 "subsystems": [ 00:18:27.106 { 00:18:27.106 "subsystem": "bdev", 00:18:27.106 "config": [ 00:18:27.106 { 00:18:27.106 "params": { 00:18:27.106 "io_mechanism": "libaio", 00:18:27.106 "conserve_cpu": false, 00:18:27.106 "filename": "/dev/nvme0n1", 00:18:27.106 "name": "xnvme_bdev" 00:18:27.106 }, 00:18:27.106 "method": "bdev_xnvme_create" 00:18:27.106 }, 00:18:27.106 { 00:18:27.106 "method": "bdev_wait_for_examine" 00:18:27.106 } 00:18:27.106 ] 00:18:27.106 } 00:18:27.106 ] 00:18:27.106 } 00:18:27.106 [2024-11-26 18:22:20.413648] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:18:27.106 [2024-11-26 18:22:20.413833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71022 ] 00:18:27.365 [2024-11-26 18:22:20.587555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.365 [2024-11-26 18:22:20.696061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.933 Running I/O for 5 seconds... 00:18:29.819 40160.00 IOPS, 156.88 MiB/s [2024-11-26T18:22:24.094Z] 40078.50 IOPS, 156.56 MiB/s [2024-11-26T18:22:25.473Z] 39829.00 IOPS, 155.58 MiB/s [2024-11-26T18:22:26.042Z] 39307.00 IOPS, 153.54 MiB/s 00:18:32.707 Latency(us) 00:18:32.707 [2024-11-26T18:22:26.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.707 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:32.707 xnvme_bdev : 5.00 38997.10 152.33 0.00 0.00 1637.81 193.17 4893.74 00:18:32.707 [2024-11-26T18:22:26.042Z] =================================================================================================================== 00:18:32.707 [2024-11-26T18:22:26.042Z] Total : 38997.10 152.33 0.00 0.00 1637.81 193.17 4893.74 00:18:34.088 18:22:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:34.088 18:22:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:34.088 18:22:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:34.088 18:22:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:34.088 18:22:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:34.088 { 00:18:34.088 "subsystems": [ 00:18:34.088 { 00:18:34.088 "subsystem": "bdev", 00:18:34.088 "config": [ 00:18:34.088 { 00:18:34.088 "params": { 00:18:34.088 "io_mechanism": "libaio", 00:18:34.088 "conserve_cpu": false, 00:18:34.088 "filename": "/dev/nvme0n1", 00:18:34.088 "name": "xnvme_bdev" 00:18:34.088 }, 00:18:34.088 "method": "bdev_xnvme_create" 00:18:34.088 }, 00:18:34.088 { 00:18:34.088 "method": "bdev_wait_for_examine" 00:18:34.088 } 00:18:34.088 ] 00:18:34.088 } 00:18:34.088 ] 00:18:34.088 } 00:18:34.089 [2024-11-26 18:22:27.289882] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
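The JSON block above is the configuration gen_conf pipes to bdevperf on /dev/fd/62. Saved to a file, the same randread run can be launched directly; a sketch using the command line from this log (config.json is a stand-in filename):

    cat > config.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "io_mechanism": "libaio", "conserve_cpu": false,
                      "filename": "/dev/nvme0n1", "name": "xnvme_bdev" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" } ] } ] }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json config.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096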
00:18:34.089 [2024-11-26 18:22:27.290080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71097 ] 00:18:34.348 [2024-11-26 18:22:27.465722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.349 [2024-11-26 18:22:27.605224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.918 Running I/O for 5 seconds... 00:18:36.794 58891.00 IOPS, 230.04 MiB/s [2024-11-26T18:22:31.083Z] 61023.50 IOPS, 238.37 MiB/s [2024-11-26T18:22:32.462Z] 62765.00 IOPS, 245.18 MiB/s [2024-11-26T18:22:33.396Z] 63677.00 IOPS, 248.74 MiB/s 00:18:40.061 Latency(us) 00:18:40.061 [2024-11-26T18:22:33.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.061 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:40.061 xnvme_bdev : 5.00 62648.57 244.72 0.00 0.00 1018.82 237.89 7068.73 00:18:40.061 [2024-11-26T18:22:33.396Z] =================================================================================================================== 00:18:40.061 [2024-11-26T18:22:33.396Z] Total : 62648.57 244.72 0.00 0.00 1018.82 237.89 7068.73 00:18:40.996 00:18:40.996 real 0m13.962s 00:18:40.996 user 0m5.441s 00:18:40.996 sys 0m6.342s 00:18:40.996 18:22:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.996 18:22:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:40.996 ************************************ 00:18:40.996 END TEST xnvme_bdevperf 00:18:40.996 ************************************ 00:18:41.253 18:22:34 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:41.254 18:22:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:41.254 18:22:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.254 18:22:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:41.254 ************************************ 00:18:41.254 START TEST xnvme_fio_plugin 00:18:41.254 ************************************ 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:41.254 18:22:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:41.254 { 00:18:41.254 "subsystems": [ 00:18:41.254 { 00:18:41.254 "subsystem": "bdev", 00:18:41.254 "config": [ 00:18:41.254 { 00:18:41.254 "params": { 00:18:41.254 "io_mechanism": "libaio", 00:18:41.254 "conserve_cpu": false, 00:18:41.254 "filename": "/dev/nvme0n1", 00:18:41.254 "name": "xnvme_bdev" 00:18:41.254 }, 00:18:41.254 "method": "bdev_xnvme_create" 00:18:41.254 }, 00:18:41.254 { 00:18:41.254 "method": "bdev_wait_for_examine" 00:18:41.254 } 00:18:41.254 ] 00:18:41.254 } 00:18:41.254 ] 00:18:41.254 } 00:18:41.254 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:41.254 fio-3.35 00:18:41.254 Starting 1 thread 00:18:47.820 00:18:47.820 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71222: Tue Nov 26 18:22:40 2024 00:18:47.820 read: IOPS=45.2k, BW=177MiB/s (185MB/s)(883MiB/5001msec) 00:18:47.820 slat (usec): min=3, max=10888, avg=19.13, stdev=34.05 00:18:47.820 clat (usec): min=77, max=12601, avg=830.47, stdev=580.35 00:18:47.820 lat (usec): min=145, max=13571, avg=849.60, stdev=586.11 00:18:47.820 clat percentiles (usec): 00:18:47.820 | 1.00th=[ 176], 5.00th=[ 255], 10.00th=[ 318], 20.00th=[ 420], 00:18:47.820 | 30.00th=[ 515], 40.00th=[ 611], 50.00th=[ 717], 60.00th=[ 832], 00:18:47.820 | 70.00th=[ 955], 80.00th=[ 1106], 90.00th=[ 1369], 95.00th=[ 1778], 00:18:47.820 | 99.00th=[ 3294], 99.50th=[ 3884], 99.90th=[ 4686], 99.95th=[ 5014], 00:18:47.820 | 99.99th=[11731] 00:18:47.820 bw ( KiB/s): min=160976, max=181488, per=96.79%, avg=175076.44, stdev=8061.10, 
samples=9 00:18:47.820 iops : min=40244, max=45372, avg=43769.11, stdev=2015.27, samples=9 00:18:47.820 lat (usec) : 100=0.01%, 250=4.64%, 500=23.77%, 750=24.81%, 1000=20.35% 00:18:47.820 lat (msec) : 2=22.63%, 4=3.38%, 10=0.41%, 20=0.01% 00:18:47.820 cpu : usr=28.20%, sys=53.44%, ctx=91, majf=0, minf=764 00:18:47.820 IO depths : 1=0.1%, 2=1.2%, 4=4.0%, 8=10.8%, 16=25.9%, 32=56.2%, >=64=1.8% 00:18:47.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.820 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:18:47.820 issued rwts: total=226144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.820 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:47.820 00:18:47.820 Run status group 0 (all jobs): 00:18:47.820 READ: bw=177MiB/s (185MB/s), 177MiB/s-177MiB/s (185MB/s-185MB/s), io=883MiB (926MB), run=5001-5001msec 00:18:48.759 ----------------------------------------------------- 00:18:48.759 Suppressions used: 00:18:48.759 count bytes template 00:18:48.759 1 11 /usr/src/fio/parse.c 00:18:48.759 1 8 libtcmalloc_minimal.so 00:18:48.759 1 904 libcrypto.so 00:18:48.759 ----------------------------------------------------- 00:18:48.759 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:48.759 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:48.759 { 00:18:48.759 "subsystems": [ 00:18:48.759 { 00:18:48.759 "subsystem": "bdev", 00:18:48.759 "config": [ 00:18:48.759 { 00:18:48.759 "params": { 00:18:48.759 "io_mechanism": "libaio", 00:18:48.759 "conserve_cpu": false, 00:18:48.759 "filename": "/dev/nvme0n1", 00:18:48.759 "name": "xnvme_bdev" 00:18:48.759 }, 00:18:48.759 "method": "bdev_xnvme_create" 00:18:48.759 }, 00:18:48.759 { 00:18:48.759 "method": "bdev_wait_for_examine" 00:18:48.759 } 00:18:48.759 ] 00:18:48.759 } 00:18:48.759 ] 00:18:48.759 } 00:18:49.020 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:49.020 fio-3.35 00:18:49.020 Starting 1 thread 00:18:55.597 00:18:55.597 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71319: Tue Nov 26 18:22:47 2024 00:18:55.597 write: IOPS=65.3k, BW=255MiB/s (268MB/s)(1276MiB/5001msec); 0 zone resets 00:18:55.597 slat (usec): min=3, max=831, avg=12.89, stdev=27.48 00:18:55.597 clat (usec): min=37, max=6801, avg=619.26, stdev=338.02 00:18:55.597 lat (usec): min=89, max=6865, avg=632.15, stdev=339.12 00:18:55.597 clat percentiles (usec): 00:18:55.597 | 1.00th=[ 163], 5.00th=[ 258], 10.00th=[ 314], 20.00th=[ 396], 00:18:55.597 | 30.00th=[ 453], 40.00th=[ 510], 50.00th=[ 570], 60.00th=[ 627], 00:18:55.597 | 70.00th=[ 693], 80.00th=[ 783], 90.00th=[ 930], 95.00th=[ 1123], 00:18:55.597 | 99.00th=[ 1860], 99.50th=[ 2409], 99.90th=[ 3916], 99.95th=[ 4359], 00:18:55.597 | 99.99th=[ 5145] 00:18:55.597 bw ( KiB/s): min=162331, max=316024, per=100.00%, avg=263766.56, stdev=50492.10, samples=9 00:18:55.597 iops : min=40582, max=79006, avg=65941.56, stdev=12623.21, samples=9 00:18:55.597 lat (usec) : 50=0.01%, 100=0.07%, 250=4.50%, 500=33.49%, 750=38.98% 00:18:55.597 lat (usec) : 1000=15.47% 00:18:55.597 lat (msec) : 2=6.66%, 4=0.73%, 10=0.09% 00:18:55.597 cpu : usr=35.90%, sys=52.38%, ctx=18, majf=0, minf=765 00:18:55.597 IO depths : 1=0.1%, 2=0.8%, 4=2.8%, 8=8.5%, 16=24.0%, 32=61.6%, >=64=2.1% 00:18:55.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.597 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:18:55.597 issued rwts: total=0,326781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.597 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.597 00:18:55.597 Run status group 0 (all jobs): 00:18:55.597 WRITE: bw=255MiB/s (268MB/s), 255MiB/s-255MiB/s (268MB/s-268MB/s), io=1276MiB (1338MB), run=5001-5001msec 00:18:56.163 ----------------------------------------------------- 00:18:56.163 Suppressions used: 00:18:56.163 count bytes template 00:18:56.163 1 11 /usr/src/fio/parse.c 00:18:56.163 1 8 libtcmalloc_minimal.so 00:18:56.163 1 904 libcrypto.so 00:18:56.163 ----------------------------------------------------- 00:18:56.163 00:18:56.163 00:18:56.163 real 0m15.131s 00:18:56.163 user 0m7.168s 00:18:56.163 sys 0m6.133s 00:18:56.163 
************************************ 00:18:56.163 END TEST xnvme_fio_plugin 00:18:56.163 ************************************ 00:18:56.163 18:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.163 18:22:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:56.422 18:22:49 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:56.422 18:22:49 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:56.422 18:22:49 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:56.422 18:22:49 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:56.422 18:22:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:56.422 18:22:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.422 18:22:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:56.422 ************************************ 00:18:56.422 START TEST xnvme_rpc 00:18:56.422 ************************************ 00:18:56.422 18:22:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:56.422 18:22:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:56.422 18:22:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:56.422 18:22:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:56.422 18:22:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:56.422 18:22:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71411 00:18:56.422 18:22:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:56.422 18:22:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71411 00:18:56.422 18:22:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71411 ']' 00:18:56.422 18:22:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.422 18:22:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.423 18:22:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.423 18:22:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.423 18:22:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.423 [2024-11-26 18:22:49.671396] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
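This second xnvme_rpc pass exercises the other half of the cc map declared at the top of the test (cc["false"]= and cc["true"]=-c): the only difference from the earlier libaio pass is the extra -c flag on creation, which the test then expects to surface as conserve_cpu=true. A sketch of the changed call, same paths as this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    # framework_get_config bdev should now report "conserve_cpu": true for xnvme_bdev.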
00:18:56.423 [2024-11-26 18:22:49.671635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71411 ] 00:18:56.682 [2024-11-26 18:22:49.853640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.682 [2024-11-26 18:22:49.995093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.072 xnvme_bdev 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:58.072 18:22:51 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71411 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71411 ']' 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71411 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71411 00:18:58.072 killing process with pid 71411 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71411' 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71411 00:18:58.072 18:22:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71411 00:19:00.612 ************************************ 00:19:00.612 END TEST xnvme_rpc 00:19:00.612 00:19:00.612 real 0m4.386s 00:19:00.612 user 0m4.265s 00:19:00.612 sys 0m0.682s 00:19:00.612 18:22:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.612 18:22:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:00.612 ************************************ 00:19:00.871 18:22:53 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:00.871 18:22:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:00.871 18:22:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.871 18:22:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:00.871 ************************************ 00:19:00.871 START TEST xnvme_bdevperf 00:19:00.871 ************************************ 00:19:00.871 18:22:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:00.871 18:22:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:00.871 18:22:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:19:00.871 18:22:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:00.871 18:22:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:00.871 18:22:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:19:00.871 18:22:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:00.871 18:22:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:00.871 { 00:19:00.871 "subsystems": [ 00:19:00.871 { 00:19:00.871 "subsystem": "bdev", 00:19:00.871 "config": [ 00:19:00.871 { 00:19:00.871 "params": { 00:19:00.871 "io_mechanism": "libaio", 00:19:00.871 "conserve_cpu": true, 00:19:00.871 "filename": "/dev/nvme0n1", 00:19:00.871 "name": "xnvme_bdev" 00:19:00.871 }, 00:19:00.871 "method": "bdev_xnvme_create" 00:19:00.871 }, 00:19:00.871 { 00:19:00.871 "method": "bdev_wait_for_examine" 00:19:00.871 } 00:19:00.871 ] 00:19:00.871 } 00:19:00.871 ] 00:19:00.871 } 00:19:00.871 [2024-11-26 18:22:54.112231] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:19:00.871 [2024-11-26 18:22:54.112373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71497 ] 00:19:01.131 [2024-11-26 18:22:54.290226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.131 [2024-11-26 18:22:54.436834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.699 Running I/O for 5 seconds... 00:19:03.578 62836.00 IOPS, 245.45 MiB/s [2024-11-26T18:22:58.293Z] 57376.50 IOPS, 224.13 MiB/s [2024-11-26T18:22:59.256Z] 58472.00 IOPS, 228.41 MiB/s [2024-11-26T18:23:00.192Z] 59533.00 IOPS, 232.55 MiB/s [2024-11-26T18:23:00.192Z] 59208.80 IOPS, 231.28 MiB/s 00:19:06.857 Latency(us) 00:19:06.857 [2024-11-26T18:23:00.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.857 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:06.857 xnvme_bdev : 5.00 59177.22 231.16 0.00 0.00 1078.50 314.80 8127.61 00:19:06.857 [2024-11-26T18:23:00.192Z] =================================================================================================================== 00:19:06.857 [2024-11-26T18:23:00.192Z] Total : 59177.22 231.16 0.00 0.00 1078.50 314.80 8127.61 00:19:08.236 18:23:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:08.236 18:23:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:08.236 18:23:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:08.236 18:23:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:08.236 18:23:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:08.236 { 00:19:08.236 "subsystems": [ 00:19:08.236 { 00:19:08.236 "subsystem": "bdev", 00:19:08.236 "config": [ 00:19:08.236 { 00:19:08.236 "params": { 00:19:08.236 "io_mechanism": "libaio", 00:19:08.236 "conserve_cpu": true, 00:19:08.236 "filename": "/dev/nvme0n1", 00:19:08.236 "name": "xnvme_bdev" 00:19:08.236 }, 00:19:08.236 "method": "bdev_xnvme_create" 00:19:08.236 }, 00:19:08.236 { 00:19:08.236 "method": "bdev_wait_for_examine" 00:19:08.236 } 00:19:08.236 ] 00:19:08.236 } 00:19:08.236 ] 00:19:08.236 } 00:19:08.236 [2024-11-26 18:23:01.250589] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
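The MiB/s column in the bdevperf tables follows directly from IOPS at the fixed 4-KiB IO size set by -o 4096; for the randread result above, 59177.22 IOPS x 4096 B = 231.16 MiB/s. A one-line sanity check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 59177.22 * 4096 / (1024 * 1024) }'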
00:19:08.236 [2024-11-26 18:23:01.250810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71578 ] 00:19:08.236 [2024-11-26 18:23:01.432750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.496 [2024-11-26 18:23:01.578791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.754 Running I/O for 5 seconds... 00:19:11.070 66984.00 IOPS, 261.66 MiB/s [2024-11-26T18:23:05.342Z] 66562.50 IOPS, 260.01 MiB/s [2024-11-26T18:23:06.280Z] 64789.33 IOPS, 253.08 MiB/s [2024-11-26T18:23:07.218Z] 64519.75 IOPS, 252.03 MiB/s 00:19:13.883 Latency(us) 00:19:13.883 [2024-11-26T18:23:07.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.883 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:13.883 xnvme_bdev : 5.00 64789.55 253.08 0.00 0.00 985.06 175.29 2589.96 00:19:13.883 [2024-11-26T18:23:07.218Z] =================================================================================================================== 00:19:13.883 [2024-11-26T18:23:07.218Z] Total : 64789.55 253.08 0.00 0.00 985.06 175.29 2589.96 00:19:15.263 00:19:15.263 real 0m14.289s 00:19:15.263 user 0m6.101s 00:19:15.263 sys 0m6.646s 00:19:15.263 18:23:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.263 18:23:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:15.263 ************************************ 00:19:15.263 END TEST xnvme_bdevperf 00:19:15.263 ************************************ 00:19:15.263 18:23:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:15.263 18:23:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:15.263 18:23:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.263 18:23:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:15.263 ************************************ 00:19:15.263 START TEST xnvme_fio_plugin 00:19:15.263 ************************************ 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:15.263 { 00:19:15.263 "subsystems": [ 00:19:15.263 { 00:19:15.263 "subsystem": "bdev", 00:19:15.263 "config": [ 00:19:15.263 { 00:19:15.263 "params": { 00:19:15.263 "io_mechanism": "libaio", 00:19:15.263 "conserve_cpu": true, 00:19:15.263 "filename": "/dev/nvme0n1", 00:19:15.263 "name": "xnvme_bdev" 00:19:15.263 }, 00:19:15.263 "method": "bdev_xnvme_create" 00:19:15.263 }, 00:19:15.263 { 00:19:15.263 "method": "bdev_wait_for_examine" 00:19:15.263 } 00:19:15.263 ] 00:19:15.263 } 00:19:15.263 ] 00:19:15.263 } 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:15.263 18:23:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:15.523 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:15.523 fio-3.35 00:19:15.523 Starting 1 thread 00:19:22.098 00:19:22.098 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71703: Tue Nov 26 18:23:14 2024 00:19:22.098 read: IOPS=56.2k, BW=219MiB/s (230MB/s)(1097MiB/5001msec) 00:19:22.098 slat (usec): min=3, max=555, avg=15.15, stdev=26.11 00:19:22.098 clat (usec): min=81, max=6530, avg=697.13, stdev=471.46 00:19:22.098 lat (usec): min=100, max=6585, avg=712.28, stdev=476.22 00:19:22.098 clat percentiles (usec): 00:19:22.098 | 1.00th=[ 169], 5.00th=[ 262], 10.00th=[ 326], 20.00th=[ 404], 00:19:22.098 | 30.00th=[ 457], 40.00th=[ 515], 50.00th=[ 578], 60.00th=[ 652], 00:19:22.098 | 70.00th=[ 758], 80.00th=[ 914], 90.00th=[ 1156], 95.00th=[ 1434], 00:19:22.098 | 99.00th=[ 2802], 99.50th=[ 3458], 99.90th=[ 4555], 99.95th=[ 4883], 00:19:22.098 | 99.99th=[ 5538] 00:19:22.098 bw ( KiB/s): min=166496, max=338824, per=93.10%, avg=209141.56, stdev=52921.68, samples=9 
00:19:22.098 iops : min=41624, max=84706, avg=52285.22, stdev=13230.46, samples=9 00:19:22.098 lat (usec) : 100=0.02%, 250=4.26%, 500=33.56%, 750=31.70%, 1000=14.83% 00:19:22.098 lat (msec) : 2=13.52%, 4=1.85%, 10=0.26% 00:19:22.098 cpu : usr=30.74%, sys=53.82%, ctx=37, majf=0, minf=764 00:19:22.098 IO depths : 1=0.2%, 2=1.0%, 4=3.4%, 8=9.2%, 16=23.9%, 32=60.3%, >=64=2.1% 00:19:22.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.098 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:19:22.098 issued rwts: total=280860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.098 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:22.098 00:19:22.098 Run status group 0 (all jobs): 00:19:22.098 READ: bw=219MiB/s (230MB/s), 219MiB/s-219MiB/s (230MB/s-230MB/s), io=1097MiB (1150MB), run=5001-5001msec 00:19:22.667 ----------------------------------------------------- 00:19:22.667 Suppressions used: 00:19:22.667 count bytes template 00:19:22.667 1 11 /usr/src/fio/parse.c 00:19:22.667 1 8 libtcmalloc_minimal.so 00:19:22.667 1 904 libcrypto.so 00:19:22.667 ----------------------------------------------------- 00:19:22.667 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:22.668 18:23:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:22.927 18:23:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:22.927 18:23:16 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:22.927 18:23:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:22.927 18:23:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:22.927 18:23:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:22.927 { 00:19:22.927 "subsystems": [ 00:19:22.927 { 00:19:22.927 "subsystem": "bdev", 00:19:22.927 "config": [ 00:19:22.927 { 00:19:22.927 "params": { 00:19:22.927 "io_mechanism": "libaio", 00:19:22.927 "conserve_cpu": true, 00:19:22.927 "filename": "/dev/nvme0n1", 00:19:22.927 "name": "xnvme_bdev" 00:19:22.927 }, 00:19:22.927 "method": "bdev_xnvme_create" 00:19:22.927 }, 00:19:22.927 { 00:19:22.927 "method": "bdev_wait_for_examine" 00:19:22.927 } 00:19:22.927 ] 00:19:22.927 } 00:19:22.927 ] 00:19:22.927 } 00:19:22.927 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:22.927 fio-3.35 00:19:22.927 Starting 1 thread 00:19:29.497 00:19:29.497 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71806: Tue Nov 26 18:23:22 2024 00:19:29.497 write: IOPS=71.4k, BW=279MiB/s (293MB/s)(1395MiB/5001msec); 0 zone resets 00:19:29.497 slat (usec): min=3, max=2931, avg=11.74, stdev=25.75 00:19:29.497 clat (usec): min=60, max=5148, avg=573.01, stdev=270.57 00:19:29.497 lat (usec): min=136, max=5201, avg=584.76, stdev=270.06 00:19:29.497 clat percentiles (usec): 00:19:29.497 | 1.00th=[ 163], 5.00th=[ 258], 10.00th=[ 310], 20.00th=[ 388], 00:19:29.497 | 30.00th=[ 437], 40.00th=[ 486], 50.00th=[ 537], 60.00th=[ 586], 00:19:29.497 | 70.00th=[ 652], 80.00th=[ 717], 90.00th=[ 848], 95.00th=[ 996], 00:19:29.497 | 99.00th=[ 1418], 99.50th=[ 1795], 99.90th=[ 3261], 99.95th=[ 3687], 00:19:29.497 | 99.99th=[ 4293] 00:19:29.497 bw ( KiB/s): min=219968, max=337440, per=99.47%, avg=284147.89, stdev=43003.22, samples=9 00:19:29.497 iops : min=54992, max=84360, avg=71036.89, stdev=10750.72, samples=9 00:19:29.497 lat (usec) : 100=0.05%, 250=4.49%, 500=38.66%, 750=39.90%, 1000=12.10% 00:19:29.497 lat (msec) : 2=4.43%, 4=0.35%, 10=0.03% 00:19:29.497 cpu : usr=37.46%, sys=52.00%, ctx=11, majf=0, minf=765 00:19:29.497 IO depths : 1=0.1%, 2=0.7%, 4=2.5%, 8=8.1%, 16=23.6%, 32=62.9%, >=64=2.2% 00:19:29.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.497 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:19:29.497 issued rwts: total=0,357141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.497 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:29.497 00:19:29.497 Run status group 0 (all jobs): 00:19:29.497 WRITE: bw=279MiB/s (293MB/s), 279MiB/s-279MiB/s (293MB/s-293MB/s), io=1395MiB (1463MB), run=5001-5001msec 00:19:30.435 ----------------------------------------------------- 00:19:30.435 Suppressions used: 00:19:30.436 count bytes template 00:19:30.436 1 11 /usr/src/fio/parse.c 00:19:30.436 1 8 libtcmalloc_minimal.so 00:19:30.436 1 904 libcrypto.so 00:19:30.436 ----------------------------------------------------- 00:19:30.436 00:19:30.436 00:19:30.436 real 0m15.274s 00:19:30.436 user 0m7.408s 00:19:30.436 sys 0m6.225s 00:19:30.436 ************************************ 
00:19:30.436 END TEST xnvme_fio_plugin 00:19:30.436 ************************************ 00:19:30.436 18:23:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.436 18:23:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:30.436 18:23:23 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:19:30.436 18:23:23 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:19:30.436 18:23:23 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:19:30.436 18:23:23 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:19:30.436 18:23:23 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:19:30.436 18:23:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:30.436 18:23:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:19:30.436 18:23:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:19:30.436 18:23:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:30.436 18:23:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:30.436 18:23:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.436 18:23:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:30.436 ************************************ 00:19:30.436 START TEST xnvme_rpc 00:19:30.436 ************************************ 00:19:30.436 18:23:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:30.436 18:23:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:30.436 18:23:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:30.436 18:23:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:30.436 18:23:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:30.436 18:23:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71896 00:19:30.436 18:23:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.436 18:23:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71896 00:19:30.436 18:23:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71896 ']' 00:19:30.436 18:23:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.436 18:23:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.436 18:23:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.436 18:23:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.436 18:23:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:30.696 [2024-11-26 18:23:23.813741] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:19:30.696 [2024-11-26 18:23:23.813864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71896 ] 00:19:30.696 [2024-11-26 18:23:23.996150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.956 [2024-11-26 18:23:24.133343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.895 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.895 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:31.895 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:19:31.895 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.895 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:31.895 xnvme_bdev 00:19:31.895 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.895 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:31.895 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:31.895 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.895 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:31.895 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:31.895 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71896 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71896 ']' 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71896 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71896 00:19:32.155 killing process with pid 71896 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71896' 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71896 00:19:32.155 18:23:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71896 00:19:35.446 00:19:35.446 real 0m4.401s 00:19:35.446 user 0m4.280s 00:19:35.446 sys 0m0.706s 00:19:35.446 18:23:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.446 ************************************ 00:19:35.446 END TEST xnvme_rpc 00:19:35.446 ************************************ 00:19:35.446 18:23:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:35.446 18:23:28 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:35.446 18:23:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:35.446 18:23:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.446 18:23:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:35.446 ************************************ 00:19:35.446 START TEST xnvme_bdevperf 00:19:35.446 ************************************ 00:19:35.446 18:23:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:35.446 18:23:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:35.446 18:23:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:19:35.446 18:23:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:35.446 18:23:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:35.446 18:23:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:19:35.446 18:23:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:35.446 18:23:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:35.446 { 00:19:35.446 "subsystems": [ 00:19:35.446 { 00:19:35.446 "subsystem": "bdev", 00:19:35.446 "config": [ 00:19:35.446 { 00:19:35.446 "params": { 00:19:35.446 "io_mechanism": "io_uring", 00:19:35.446 "conserve_cpu": false, 00:19:35.446 "filename": "/dev/nvme0n1", 00:19:35.446 "name": "xnvme_bdev" 00:19:35.446 }, 00:19:35.446 "method": "bdev_xnvme_create" 00:19:35.446 }, 00:19:35.446 { 00:19:35.446 "method": "bdev_wait_for_examine" 00:19:35.446 } 00:19:35.446 ] 00:19:35.446 } 00:19:35.446 ] 00:19:35.446 } 00:19:35.446 [2024-11-26 18:23:28.272396] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:19:35.446 [2024-11-26 18:23:28.272604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71985 ] 00:19:35.446 [2024-11-26 18:23:28.455269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.446 [2024-11-26 18:23:28.596942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.705 Running I/O for 5 seconds... 00:19:38.020 25088.00 IOPS, 98.00 MiB/s [2024-11-26T18:23:32.292Z] 24256.00 IOPS, 94.75 MiB/s [2024-11-26T18:23:33.231Z] 24042.67 IOPS, 93.92 MiB/s [2024-11-26T18:23:34.169Z] 24128.00 IOPS, 94.25 MiB/s 00:19:40.834 Latency(us) 00:19:40.834 [2024-11-26T18:23:34.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.834 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:40.834 xnvme_bdev : 5.01 24171.29 94.42 0.00 0.00 2637.74 1051.72 8986.16 00:19:40.834 [2024-11-26T18:23:34.169Z] =================================================================================================================== 00:19:40.834 [2024-11-26T18:23:34.169Z] Total : 24171.29 94.42 0.00 0.00 2637.74 1051.72 8986.16 00:19:42.213 18:23:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:42.213 18:23:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:42.213 18:23:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:42.213 18:23:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:42.213 18:23:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:42.213 { 00:19:42.213 "subsystems": [ 00:19:42.213 { 00:19:42.213 "subsystem": "bdev", 00:19:42.213 "config": [ 00:19:42.213 { 00:19:42.213 "params": { 00:19:42.213 "io_mechanism": "io_uring", 00:19:42.213 "conserve_cpu": false, 00:19:42.213 "filename": "/dev/nvme0n1", 00:19:42.213 "name": "xnvme_bdev" 00:19:42.213 }, 00:19:42.213 "method": "bdev_xnvme_create" 00:19:42.213 }, 00:19:42.213 { 00:19:42.213 "method": "bdev_wait_for_examine" 00:19:42.213 } 00:19:42.213 ] 00:19:42.213 } 00:19:42.213 ] 00:19:42.213 } 00:19:42.213 [2024-11-26 18:23:35.371588] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
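The --json /dev/fd/62 argument above is the test's gen_conf output fed over an anonymous pipe. A standalone sketch of the same randwrite run, assuming the JSON config printed above has been saved verbatim to /tmp/xnvme_bdev.json:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev.json \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096    # queue depth 64, 5 s, 4 KiB I/O against the named bdev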
00:19:42.213 [2024-11-26 18:23:35.371724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72067 ] 00:19:42.472 [2024-11-26 18:23:35.547147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.472 [2024-11-26 18:23:35.716216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.077 Running I/O for 5 seconds... 00:19:44.954 25664.00 IOPS, 100.25 MiB/s [2024-11-26T18:23:39.227Z] 24256.00 IOPS, 94.75 MiB/s [2024-11-26T18:23:40.166Z] 23786.67 IOPS, 92.92 MiB/s [2024-11-26T18:23:41.541Z] 23664.00 IOPS, 92.44 MiB/s 00:19:48.206 Latency(us) 00:19:48.206 [2024-11-26T18:23:41.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.206 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:48.206 xnvme_bdev : 5.01 23458.74 91.64 0.00 0.00 2717.27 994.49 9043.40 00:19:48.206 [2024-11-26T18:23:41.541Z] =================================================================================================================== 00:19:48.206 [2024-11-26T18:23:41.541Z] Total : 23458.74 91.64 0.00 0.00 2717.27 994.49 9043.40 00:19:49.141 00:19:49.141 real 0m14.225s 00:19:49.141 user 0m8.140s 00:19:49.141 sys 0m5.861s 00:19:49.141 18:23:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:49.141 18:23:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:49.141 ************************************ 00:19:49.141 END TEST xnvme_bdevperf 00:19:49.141 ************************************ 00:19:49.141 18:23:42 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:49.141 18:23:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:49.141 18:23:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:49.141 18:23:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:49.141 ************************************ 00:19:49.141 START TEST xnvme_fio_plugin 00:19:49.141 ************************************ 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:49.141 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:49.400 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:49.400 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:49.400 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:49.400 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:49.400 18:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:49.400 { 00:19:49.400 "subsystems": [ 00:19:49.400 { 00:19:49.400 "subsystem": "bdev", 00:19:49.400 "config": [ 00:19:49.400 { 00:19:49.400 "params": { 00:19:49.400 "io_mechanism": "io_uring", 00:19:49.400 "conserve_cpu": false, 00:19:49.400 "filename": "/dev/nvme0n1", 00:19:49.400 "name": "xnvme_bdev" 00:19:49.400 }, 00:19:49.400 "method": "bdev_xnvme_create" 00:19:49.400 }, 00:19:49.400 { 00:19:49.400 "method": "bdev_wait_for_examine" 00:19:49.400 } 00:19:49.400 ] 00:19:49.400 } 00:19:49.400 ] 00:19:49.400 } 00:19:49.400 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:49.400 fio-3.35 00:19:49.400 Starting 1 thread 00:19:55.987 00:19:55.987 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72192: Tue Nov 26 18:23:48 2024 00:19:55.987 read: IOPS=23.6k, BW=92.3MiB/s (96.8MB/s)(462MiB/5002msec) 00:19:55.987 slat (nsec): min=6168, max=76305, avg=10028.36, stdev=2592.47 00:19:55.987 clat (usec): min=1114, max=6341, avg=2321.34, stdev=249.56 00:19:55.987 lat (usec): min=1123, max=6353, avg=2331.36, stdev=249.94 00:19:55.987 clat percentiles (usec): 00:19:55.987 | 1.00th=[ 1860], 5.00th=[ 1958], 10.00th=[ 2024], 20.00th=[ 2114], 00:19:55.987 | 30.00th=[ 2180], 40.00th=[ 2245], 50.00th=[ 2311], 60.00th=[ 2376], 00:19:55.987 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 2638], 95.00th=[ 2704], 00:19:55.987 | 99.00th=[ 2802], 99.50th=[ 2868], 99.90th=[ 3359], 99.95th=[ 5669], 00:19:55.987 | 99.99th=[ 6194] 00:19:55.987 bw ( KiB/s): min=93696, max=97280, per=100.00%, avg=94663.11, 
stdev=1038.12, samples=9 00:19:55.987 iops : min=23424, max=24320, avg=23665.78, stdev=259.53, samples=9 00:19:55.987 lat (msec) : 2=8.22%, 4=91.73%, 10=0.05% 00:19:55.987 cpu : usr=44.65%, sys=54.07%, ctx=11, majf=0, minf=762 00:19:55.987 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:55.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.987 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:55.987 issued rwts: total=118178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.987 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:55.987 00:19:55.987 Run status group 0 (all jobs): 00:19:55.987 READ: bw=92.3MiB/s (96.8MB/s), 92.3MiB/s-92.3MiB/s (96.8MB/s-96.8MB/s), io=462MiB (484MB), run=5002-5002msec 00:19:56.559 ----------------------------------------------------- 00:19:56.559 Suppressions used: 00:19:56.559 count bytes template 00:19:56.559 1 11 /usr/src/fio/parse.c 00:19:56.559 1 8 libtcmalloc_minimal.so 00:19:56.559 1 904 libcrypto.so 00:19:56.559 ----------------------------------------------------- 00:19:56.559 00:19:56.559 18:23:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:56.559 18:23:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:56.559 18:23:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:56.559 18:23:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:56.559 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:56.559 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:56.559 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:56.559 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:56.559 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:56.559 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:56.559 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:56.559 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:56.559 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:56.559 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:56.818 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:56.818 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:56.818 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:56.818 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 
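The @1349 through @1356 lines around here resolve and apply the sanitizer preload: fio dlopen()s the spdk_bdev engine at runtime, so the ASAN runtime the plugin links against has to be loaded ahead of it. As a standalone sketch (the spdk_json_conf path is a stand-in for the /dev/fd/62 pipe):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')    # /usr/lib64/libasan.so.8 on this host
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev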
00:19:56.818 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:56.818 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:56.818 18:23:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:56.818 { 00:19:56.818 "subsystems": [ 00:19:56.818 { 00:19:56.818 "subsystem": "bdev", 00:19:56.818 "config": [ 00:19:56.818 { 00:19:56.818 "params": { 00:19:56.818 "io_mechanism": "io_uring", 00:19:56.818 "conserve_cpu": false, 00:19:56.818 "filename": "/dev/nvme0n1", 00:19:56.818 "name": "xnvme_bdev" 00:19:56.818 }, 00:19:56.818 "method": "bdev_xnvme_create" 00:19:56.818 }, 00:19:56.818 { 00:19:56.818 "method": "bdev_wait_for_examine" 00:19:56.818 } 00:19:56.818 ] 00:19:56.818 } 00:19:56.818 ] 00:19:56.818 } 00:19:56.818 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:56.818 fio-3.35 00:19:56.818 Starting 1 thread 00:20:03.380 00:20:03.380 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72289: Tue Nov 26 18:23:55 2024 00:20:03.380 write: IOPS=22.8k, BW=89.0MiB/s (93.3MB/s)(445MiB/5002msec); 0 zone resets 00:20:03.380 slat (usec): min=6, max=248, avg=10.62, stdev= 3.06 00:20:03.380 clat (usec): min=1622, max=11934, avg=2397.69, stdev=319.41 00:20:03.380 lat (usec): min=1633, max=11946, avg=2408.31, stdev=319.55 00:20:03.380 clat percentiles (usec): 00:20:03.380 | 1.00th=[ 1958], 5.00th=[ 2040], 10.00th=[ 2089], 20.00th=[ 2180], 00:20:03.380 | 30.00th=[ 2245], 40.00th=[ 2311], 50.00th=[ 2376], 60.00th=[ 2474], 00:20:03.380 | 70.00th=[ 2540], 80.00th=[ 2606], 90.00th=[ 2704], 95.00th=[ 2769], 00:20:03.380 | 99.00th=[ 2868], 99.50th=[ 2933], 99.90th=[ 3556], 99.95th=[11207], 00:20:03.380 | 99.99th=[11863] 00:20:03.380 bw ( KiB/s): min=90112, max=91648, per=100.00%, avg=91172.67, stdev=481.67, samples=9 00:20:03.380 iops : min=22528, max=22912, avg=22793.11, stdev=120.45, samples=9 00:20:03.380 lat (msec) : 2=2.77%, 4=97.17%, 10=0.01%, 20=0.06% 00:20:03.380 cpu : usr=46.35%, sys=52.27%, ctx=53, majf=0, minf=763 00:20:03.380 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:03.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.380 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:20:03.380 issued rwts: total=0,113984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.380 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:03.380 00:20:03.380 Run status group 0 (all jobs): 00:20:03.380 WRITE: bw=89.0MiB/s (93.3MB/s), 89.0MiB/s-89.0MiB/s (93.3MB/s-93.3MB/s), io=445MiB (467MB), run=5002-5002msec 00:20:03.948 ----------------------------------------------------- 00:20:03.948 Suppressions used: 00:20:03.948 count bytes template 00:20:03.948 1 11 /usr/src/fio/parse.c 00:20:03.948 1 8 libtcmalloc_minimal.so 00:20:03.948 1 904 libcrypto.so 00:20:03.948 ----------------------------------------------------- 00:20:03.948 00:20:03.948 00:20:03.948 real 0m14.708s 00:20:03.948 user 0m8.341s 00:20:03.948 sys 0m5.982s 00:20:03.948 18:23:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.948 18:23:57 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@10 -- # set +x 00:20:03.948 ************************************ 00:20:03.948 END TEST xnvme_fio_plugin 00:20:03.948 ************************************ 00:20:03.948 18:23:57 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:03.948 18:23:57 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:20:03.948 18:23:57 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:20:03.948 18:23:57 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:03.948 18:23:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:03.948 18:23:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.948 18:23:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:03.948 ************************************ 00:20:03.948 START TEST xnvme_rpc 00:20:03.948 ************************************ 00:20:03.948 18:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:03.948 18:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:03.948 18:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:03.948 18:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:03.948 18:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:03.948 18:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72381 00:20:03.948 18:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:03.948 18:23:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72381 00:20:03.948 18:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72381 ']' 00:20:03.948 18:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.948 18:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.948 18:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.948 18:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.948 18:23:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:04.207 [2024-11-26 18:23:57.332637] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
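The rpc_xnvme calls that follow are a small wrapper, visible at xnvme/common.sh@65-66: fetch the bdev framework config and pull one parameter of the bdev_xnvme_create entry with jq. An equivalent helper, as a sketch:

rpc_xnvme() {    # usage: rpc_xnvme name|filename|io_mechanism|conserve_cpu
    ./scripts/rpc.py framework_get_config bdev |
        jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
}
rpc_xnvme io_mechanism    # io_uring
rpc_xnvme conserve_cpu    # true in this pass, since the bdev was created with -c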
00:20:04.207 [2024-11-26 18:23:57.332748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72381 ] 00:20:04.207 [2024-11-26 18:23:57.506152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.465 [2024-11-26 18:23:57.614136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.401 xnvme_bdev 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.401 18:23:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72381 00:20:05.402 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72381 ']' 00:20:05.402 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72381 00:20:05.402 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:05.402 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.402 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72381 00:20:05.402 killing process with pid 72381 00:20:05.402 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.402 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.402 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72381' 00:20:05.402 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72381 00:20:05.402 18:23:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72381 00:20:07.936 00:20:07.936 real 0m3.708s 00:20:07.936 user 0m3.802s 00:20:07.936 sys 0m0.494s 00:20:07.936 ************************************ 00:20:07.936 END TEST xnvme_rpc 00:20:07.936 ************************************ 00:20:07.936 18:24:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.936 18:24:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:07.936 18:24:00 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:07.936 18:24:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:07.936 18:24:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.936 18:24:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:07.936 ************************************ 00:20:07.936 START TEST xnvme_bdevperf 00:20:07.936 ************************************ 00:20:07.936 18:24:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:07.936 18:24:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:07.936 18:24:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:20:07.936 18:24:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:07.936 18:24:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:07.936 18:24:01 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:20:07.936 18:24:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:07.936 18:24:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:07.936 { 00:20:07.936 "subsystems": [ 00:20:07.936 { 00:20:07.936 "subsystem": "bdev", 00:20:07.936 "config": [ 00:20:07.936 { 00:20:07.936 "params": { 00:20:07.936 "io_mechanism": "io_uring", 00:20:07.936 "conserve_cpu": true, 00:20:07.936 "filename": "/dev/nvme0n1", 00:20:07.936 "name": "xnvme_bdev" 00:20:07.936 }, 00:20:07.936 "method": "bdev_xnvme_create" 00:20:07.936 }, 00:20:07.936 { 00:20:07.936 "method": "bdev_wait_for_examine" 00:20:07.936 } 00:20:07.936 ] 00:20:07.936 } 00:20:07.936 ] 00:20:07.936 } 00:20:07.936 [2024-11-26 18:24:01.099850] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:20:07.936 [2024-11-26 18:24:01.099996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72456 ] 00:20:08.194 [2024-11-26 18:24:01.271428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.194 [2024-11-26 18:24:01.381879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.451 Running I/O for 5 seconds... 00:20:10.798 28032.00 IOPS, 109.50 MiB/s [2024-11-26T18:24:05.066Z] 27104.00 IOPS, 105.88 MiB/s [2024-11-26T18:24:06.001Z] 27498.67 IOPS, 107.42 MiB/s [2024-11-26T18:24:06.935Z] 27136.00 IOPS, 106.00 MiB/s 00:20:13.600 Latency(us) 00:20:13.600 [2024-11-26T18:24:06.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.600 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:13.600 xnvme_bdev : 5.00 26587.45 103.86 0.00 0.00 2398.16 1130.42 9329.58 00:20:13.600 [2024-11-26T18:24:06.935Z] =================================================================================================================== 00:20:13.600 [2024-11-26T18:24:06.935Z] Total : 26587.45 103.86 0.00 0.00 2398.16 1130.42 9329.58 00:20:14.535 18:24:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:14.535 18:24:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:14.535 18:24:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:14.535 18:24:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:14.535 18:24:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:14.535 { 00:20:14.535 "subsystems": [ 00:20:14.535 { 00:20:14.535 "subsystem": "bdev", 00:20:14.535 "config": [ 00:20:14.535 { 00:20:14.535 "params": { 00:20:14.535 "io_mechanism": "io_uring", 00:20:14.535 "conserve_cpu": true, 00:20:14.535 "filename": "/dev/nvme0n1", 00:20:14.535 "name": "xnvme_bdev" 00:20:14.535 }, 00:20:14.535 "method": "bdev_xnvme_create" 00:20:14.535 }, 00:20:14.535 { 00:20:14.535 "method": "bdev_wait_for_examine" 00:20:14.535 } 00:20:14.535 ] 00:20:14.535 } 00:20:14.535 ] 00:20:14.535 } 00:20:14.535 [2024-11-26 18:24:07.860410] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
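Only the conserve_cpu field differs between this pass and the io_uring pass earlier in the log. Reusing the config file from the earlier sketch, the toggle is a one-line jq edit (a sketch; the harness itself regenerates the JSON rather than editing it):

jq '.subsystems[0].config[0].params.conserve_cpu = true' \
    /tmp/xnvme_bdev.json > /tmp/xnvme_bdev_cc.json
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev_cc.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096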
00:20:14.536 [2024-11-26 18:24:07.860516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72542 ] 00:20:14.794 [2024-11-26 18:24:08.032161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.053 [2024-11-26 18:24:08.138598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.312 Running I/O for 5 seconds... 00:20:17.185 24000.00 IOPS, 93.75 MiB/s [2024-11-26T18:24:11.893Z] 23680.00 IOPS, 92.50 MiB/s [2024-11-26T18:24:12.825Z] 24149.33 IOPS, 94.33 MiB/s [2024-11-26T18:24:13.756Z] 24384.00 IOPS, 95.25 MiB/s 00:20:20.421 Latency(us) 00:20:20.421 [2024-11-26T18:24:13.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.421 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:20.421 xnvme_bdev : 5.00 24064.42 94.00 0.00 0.00 2649.36 1094.65 9329.58 00:20:20.421 [2024-11-26T18:24:13.756Z] =================================================================================================================== 00:20:20.421 [2024-11-26T18:24:13.756Z] Total : 24064.42 94.00 0.00 0.00 2649.36 1094.65 9329.58 00:20:21.355 00:20:21.355 real 0m13.521s 00:20:21.355 user 0m7.842s 00:20:21.355 sys 0m5.237s 00:20:21.355 18:24:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.355 ************************************ 00:20:21.355 END TEST xnvme_bdevperf 00:20:21.355 ************************************ 00:20:21.355 18:24:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:21.355 18:24:14 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:21.355 18:24:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:21.355 18:24:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.355 18:24:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:21.355 ************************************ 00:20:21.355 START TEST xnvme_fio_plugin 00:20:21.355 ************************************ 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:21.355 18:24:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:21.355 { 00:20:21.355 "subsystems": [ 00:20:21.355 { 00:20:21.355 "subsystem": "bdev", 00:20:21.355 "config": [ 00:20:21.355 { 00:20:21.355 "params": { 00:20:21.355 "io_mechanism": "io_uring", 00:20:21.355 "conserve_cpu": true, 00:20:21.355 "filename": "/dev/nvme0n1", 00:20:21.355 "name": "xnvme_bdev" 00:20:21.355 }, 00:20:21.355 "method": "bdev_xnvme_create" 00:20:21.355 }, 00:20:21.355 { 00:20:21.355 "method": "bdev_wait_for_examine" 00:20:21.355 } 00:20:21.355 ] 00:20:21.355 } 00:20:21.355 ] 00:20:21.355 } 00:20:21.613 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:21.613 fio-3.35 00:20:21.613 Starting 1 thread 00:20:28.172 00:20:28.172 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72662: Tue Nov 26 18:24:20 2024 00:20:28.172 read: IOPS=23.3k, BW=91.2MiB/s (95.6MB/s)(456MiB/5002msec) 00:20:28.172 slat (nsec): min=5210, max=82160, avg=9646.49, stdev=2561.75 00:20:28.172 clat (usec): min=1418, max=10371, avg=2366.97, stdev=304.03 00:20:28.172 lat (usec): min=1428, max=10382, avg=2376.61, stdev=304.45 00:20:28.172 clat percentiles (usec): 00:20:28.172 | 1.00th=[ 1860], 5.00th=[ 1991], 10.00th=[ 2057], 20.00th=[ 2147], 00:20:28.172 | 30.00th=[ 2212], 40.00th=[ 2278], 50.00th=[ 2343], 60.00th=[ 2442], 00:20:28.172 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2704], 95.00th=[ 2769], 00:20:28.172 | 99.00th=[ 2868], 99.50th=[ 2900], 99.90th=[ 3556], 99.95th=[ 9765], 00:20:28.172 | 99.99th=[10290] 00:20:28.172 bw ( KiB/s): min=91648, max=94720, per=99.70%, avg=93070.22, 
stdev=1016.86, samples=9 00:20:28.172 iops : min=22912, max=23680, avg=23267.56, stdev=254.22, samples=9 00:20:28.172 lat (msec) : 2=6.01%, 4=93.93%, 10=0.03%, 20=0.03% 00:20:28.172 cpu : usr=47.93%, sys=48.51%, ctx=9, majf=0, minf=762 00:20:28.172 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:28.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.172 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:20:28.172 issued rwts: total=116736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.172 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:28.172 00:20:28.172 Run status group 0 (all jobs): 00:20:28.172 READ: bw=91.2MiB/s (95.6MB/s), 91.2MiB/s-91.2MiB/s (95.6MB/s-95.6MB/s), io=456MiB (478MB), run=5002-5002msec 00:20:28.737 ----------------------------------------------------- 00:20:28.737 Suppressions used: 00:20:28.737 count bytes template 00:20:28.737 1 11 /usr/src/fio/parse.c 00:20:28.737 1 8 libtcmalloc_minimal.so 00:20:28.737 1 904 libcrypto.so 00:20:28.737 ----------------------------------------------------- 00:20:28.737 00:20:28.737 18:24:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 
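The long fio command line repeated through this section maps onto an ordinary job file. A sketch of the equivalent, written from the flags above (the spdk_json_conf path stands in for the /dev/fd/62 pipe):

cat > xnvme_bdev.fio <<'EOF'
[xnvme_bdev]
ioengine=spdk_bdev
spdk_json_conf=/tmp/xnvme_bdev.json
filename=xnvme_bdev
direct=1
bs=4k
iodepth=64
numjobs=1
rw=randwrite
time_based=1
runtime=5
thread=1
EOF
LD_PRELOAD="/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
    /usr/src/fio/fio xnvme_bdev.fio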
00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:28.738 18:24:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:28.738 { 00:20:28.738 "subsystems": [ 00:20:28.738 { 00:20:28.738 "subsystem": "bdev", 00:20:28.738 "config": [ 00:20:28.738 { 00:20:28.738 "params": { 00:20:28.738 "io_mechanism": "io_uring", 00:20:28.738 "conserve_cpu": true, 00:20:28.738 "filename": "/dev/nvme0n1", 00:20:28.738 "name": "xnvme_bdev" 00:20:28.738 }, 00:20:28.738 "method": "bdev_xnvme_create" 00:20:28.738 }, 00:20:28.738 { 00:20:28.738 "method": "bdev_wait_for_examine" 00:20:28.738 } 00:20:28.738 ] 00:20:28.738 } 00:20:28.738 ] 00:20:28.738 } 00:20:28.996 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:28.996 fio-3.35 00:20:28.996 Starting 1 thread 00:20:35.554 00:20:35.554 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72754: Tue Nov 26 18:24:27 2024 00:20:35.554 write: IOPS=22.8k, BW=88.9MiB/s (93.3MB/s)(445MiB/5001msec); 0 zone resets 00:20:35.554 slat (nsec): min=5344, max=93021, avg=10337.61, stdev=2819.76 00:20:35.554 clat (usec): min=1521, max=6413, avg=2407.14, stdev=253.55 00:20:35.554 lat (usec): min=1532, max=6427, avg=2417.48, stdev=253.94 00:20:35.554 clat percentiles (usec): 00:20:35.554 | 1.00th=[ 1909], 5.00th=[ 2040], 10.00th=[ 2114], 20.00th=[ 2180], 00:20:35.554 | 30.00th=[ 2245], 40.00th=[ 2343], 50.00th=[ 2409], 60.00th=[ 2474], 00:20:35.554 | 70.00th=[ 2540], 80.00th=[ 2638], 90.00th=[ 2737], 95.00th=[ 2769], 00:20:35.554 | 99.00th=[ 2868], 99.50th=[ 2933], 99.90th=[ 3130], 99.95th=[ 5735], 00:20:35.554 | 99.99th=[ 6259] 00:20:35.554 bw ( KiB/s): min=88576, max=94208, per=100.00%, avg=91363.56, stdev=1865.66, samples=9 00:20:35.554 iops : min=22144, max=23552, avg=22840.89, stdev=466.42, samples=9 00:20:35.554 lat (msec) : 2=3.05%, 4=96.89%, 10=0.06% 00:20:35.554 cpu : usr=46.68%, sys=49.74%, ctx=10, majf=0, minf=763 00:20:35.554 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:35.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.554 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:20:35.554 issued rwts: total=0,113856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.554 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:35.554 00:20:35.554 Run status group 0 (all jobs): 00:20:35.554 WRITE: bw=88.9MiB/s (93.3MB/s), 88.9MiB/s-88.9MiB/s (93.3MB/s-93.3MB/s), io=445MiB (466MB), run=5001-5001msec 00:20:36.123 ----------------------------------------------------- 00:20:36.123 Suppressions used: 00:20:36.123 count bytes template 00:20:36.123 1 11 /usr/src/fio/parse.c 00:20:36.123 1 8 libtcmalloc_minimal.so 00:20:36.123 1 904 libcrypto.so 00:20:36.123 ----------------------------------------------------- 00:20:36.123 00:20:36.123 ************************************ 00:20:36.123 END TEST xnvme_fio_plugin 00:20:36.123 ************************************ 00:20:36.123 00:20:36.123 real 0m14.734s 00:20:36.123 user 0m8.635s 00:20:36.123 sys 0m5.499s 00:20:36.123 18:24:29 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.123 18:24:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:36.123 18:24:29 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:20:36.123 18:24:29 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:20:36.123 18:24:29 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:20:36.123 18:24:29 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:20:36.123 18:24:29 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:20:36.123 18:24:29 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:36.123 18:24:29 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:20:36.123 18:24:29 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:20:36.123 18:24:29 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:36.123 18:24:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:36.123 18:24:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.123 18:24:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:36.123 ************************************ 00:20:36.123 START TEST xnvme_rpc 00:20:36.123 ************************************ 00:20:36.123 18:24:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:36.123 18:24:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:36.123 18:24:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:36.123 18:24:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:36.123 18:24:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:36.123 18:24:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72846 00:20:36.123 18:24:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72846 00:20:36.123 18:24:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:36.123 18:24:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72846 ']' 00:20:36.123 18:24:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.123 18:24:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.123 18:24:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.123 18:24:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.123 18:24:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:36.383 [2024-11-26 18:24:29.498856] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
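At this point the harness moves from io_uring on the block device to io_uring_cmd on the NVMe generic character device. The outer structure visible at xnvme/xnvme.sh@75 through @88 is, in sketch form:

for io in libaio io_uring io_uring_cmd; do
    filename=/dev/nvme0n1
    [[ $io == io_uring_cmd ]] && filename=/dev/ng0n1    # uring passthru wants the char device
    for conserve_cpu in false true; do
        run_test xnvme_rpc xnvme_rpc
        run_test xnvme_bdevperf xnvme_bdevperf
        run_test xnvme_fio_plugin xnvme_fio_plugin
    done
done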
00:20:36.383 [2024-11-26 18:24:29.498962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72846 ] 00:20:36.383 [2024-11-26 18:24:29.671643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.643 [2024-11-26 18:24:29.806017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.582 18:24:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.582 18:24:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:37.582 18:24:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:20:37.582 18:24:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.582 18:24:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:37.582 xnvme_bdev 00:20:37.582 18:24:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.582 18:24:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:37.582 18:24:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:37.582 18:24:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:37.582 18:24:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.582 18:24:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:37.582 18:24:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.582 18:24:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:37.582 18:24:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:37.842 18:24:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:37.842 18:24:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:37.842 18:24:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.842 18:24:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:37.842 18:24:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.842 18:24:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:20:37.842 18:24:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:37.842 18:24:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:37.842 18:24:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:37.842 18:24:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.842 18:24:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:37.842 18:24:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:37.842 
18:24:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72846 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72846 ']' 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72846 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72846 00:20:37.842 killing process with pid 72846 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72846' 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72846 00:20:37.842 18:24:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72846 00:20:40.377 ************************************ 00:20:40.377 END TEST xnvme_rpc 00:20:40.377 ************************************ 00:20:40.377 00:20:40.377 real 0m3.966s 00:20:40.377 user 0m3.897s 00:20:40.377 sys 0m0.663s 00:20:40.377 18:24:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.377 18:24:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:40.377 18:24:33 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:40.377 18:24:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:40.377 18:24:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.377 18:24:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:40.377 ************************************ 00:20:40.377 START TEST xnvme_bdevperf 00:20:40.377 ************************************ 00:20:40.377 18:24:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:40.377 18:24:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:40.377 18:24:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:20:40.377 18:24:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:40.377 18:24:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:40.377 18:24:33 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:20:40.377 18:24:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:40.377 18:24:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:40.377 { 00:20:40.377 "subsystems": [ 00:20:40.377 { 00:20:40.377 "subsystem": "bdev", 00:20:40.377 "config": [ 00:20:40.377 { 00:20:40.377 "params": { 00:20:40.377 "io_mechanism": "io_uring_cmd", 00:20:40.377 "conserve_cpu": false, 00:20:40.377 "filename": "/dev/ng0n1", 00:20:40.378 "name": "xnvme_bdev" 00:20:40.378 }, 00:20:40.378 "method": "bdev_xnvme_create" 00:20:40.378 }, 00:20:40.378 { 00:20:40.378 "method": "bdev_wait_for_examine" 00:20:40.378 } 00:20:40.378 ] 00:20:40.378 } 00:20:40.378 ] 00:20:40.378 } 00:20:40.378 [2024-11-26 18:24:33.521966] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:20:40.378 [2024-11-26 18:24:33.522134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72929 ] 00:20:40.378 [2024-11-26 18:24:33.695496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.637 [2024-11-26 18:24:33.801886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.896 Running I/O for 5 seconds... 00:20:42.810 25856.00 IOPS, 101.00 MiB/s [2024-11-26T18:24:37.526Z] 25536.00 IOPS, 99.75 MiB/s [2024-11-26T18:24:38.461Z] 25045.33 IOPS, 97.83 MiB/s [2024-11-26T18:24:39.399Z] 25312.00 IOPS, 98.88 MiB/s [2024-11-26T18:24:39.399Z] 25292.80 IOPS, 98.80 MiB/s 00:20:46.064 Latency(us) 00:20:46.064 [2024-11-26T18:24:39.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.064 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:46.064 xnvme_bdev : 5.01 25251.27 98.64 0.00 0.00 2524.24 1073.19 12076.94 00:20:46.064 [2024-11-26T18:24:39.399Z] =================================================================================================================== 00:20:46.064 [2024-11-26T18:24:39.399Z] Total : 25251.27 98.64 0.00 0.00 2524.24 1073.19 12076.94 00:20:47.003 18:24:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:47.003 18:24:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:47.003 18:24:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:47.003 18:24:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:47.003 18:24:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:47.003 { 00:20:47.003 "subsystems": [ 00:20:47.003 { 00:20:47.003 "subsystem": "bdev", 00:20:47.003 "config": [ 00:20:47.003 { 00:20:47.003 "params": { 00:20:47.003 "io_mechanism": "io_uring_cmd", 00:20:47.003 "conserve_cpu": false, 00:20:47.003 "filename": "/dev/ng0n1", 00:20:47.003 "name": "xnvme_bdev" 00:20:47.003 }, 00:20:47.003 "method": "bdev_xnvme_create" 00:20:47.003 }, 00:20:47.003 { 00:20:47.003 "method": "bdev_wait_for_examine" 00:20:47.003 } 00:20:47.003 ] 00:20:47.003 } 00:20:47.003 ] 00:20:47.003 } 00:20:47.003 [2024-11-26 18:24:40.273950] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
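The randread numbers above come from bdevperf driven by the JSON that gen_conf printed. To reproduce the run outside the harness, the same config can go through a file instead of /dev/fd/62; a sketch, with the JSON copied from the trace and the binary path assuming the CI layout:

    cat > xnvme.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "io_mechanism": "io_uring_cmd", "conserve_cpu": false,
                      "filename": "/dev/ng0n1", "name": "xnvme_bdev" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" } ] } ] }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

The flags mean queue depth 64 (-q), workload type (-w), 5-second runtime (-t), 4096-byte IOs (-o), and -T restricts the run to the named bdev; the randwrite pass starting below differs only in -w.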
00:20:47.003 [2024-11-26 18:24:40.274154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73005 ] 00:20:47.263 [2024-11-26 18:24:40.444746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.263 [2024-11-26 18:24:40.563156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.832 Running I/O for 5 seconds... 00:20:49.711 26304.00 IOPS, 102.75 MiB/s [2024-11-26T18:24:43.986Z] 25248.00 IOPS, 98.62 MiB/s [2024-11-26T18:24:44.920Z] 25578.67 IOPS, 99.92 MiB/s [2024-11-26T18:24:46.298Z] 25344.00 IOPS, 99.00 MiB/s 00:20:52.963 Latency(us) 00:20:52.963 [2024-11-26T18:24:46.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.963 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:52.963 xnvme_bdev : 5.01 25124.09 98.14 0.00 0.00 2536.74 1058.88 8986.16 00:20:52.963 [2024-11-26T18:24:46.298Z] =================================================================================================================== 00:20:52.963 [2024-11-26T18:24:46.298Z] Total : 25124.09 98.14 0.00 0.00 2536.74 1058.88 8986.16 00:20:53.900 18:24:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:53.900 18:24:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:20:53.900 18:24:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:53.900 18:24:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:53.900 18:24:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:53.900 { 00:20:53.900 "subsystems": [ 00:20:53.900 { 00:20:53.900 "subsystem": "bdev", 00:20:53.900 "config": [ 00:20:53.900 { 00:20:53.900 "params": { 00:20:53.900 "io_mechanism": "io_uring_cmd", 00:20:53.900 "conserve_cpu": false, 00:20:53.900 "filename": "/dev/ng0n1", 00:20:53.900 "name": "xnvme_bdev" 00:20:53.900 }, 00:20:53.900 "method": "bdev_xnvme_create" 00:20:53.900 }, 00:20:53.900 { 00:20:53.900 "method": "bdev_wait_for_examine" 00:20:53.900 } 00:20:53.900 ] 00:20:53.900 } 00:20:53.900 ] 00:20:53.900 } 00:20:53.900 [2024-11-26 18:24:47.043470] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:20:53.900 [2024-11-26 18:24:47.043580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73085 ] 00:20:53.900 [2024-11-26 18:24:47.213545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.161 [2024-11-26 18:24:47.322745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.421 Running I/O for 5 seconds... 
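A quick sanity check on these result tables: the MiB/s column is just IOPS times the 4 KiB IO size. For the randwrite total above (bc assumed to be installed):

    echo 'scale=2; 25124.09 * 4096 / 1048576' | bc    # 98.14, matching the MiB/s column

The same identity holds for every Device Information table in this log.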
00:20:56.737 85568.00 IOPS, 334.25 MiB/s [2024-11-26T18:24:51.037Z] 83456.00 IOPS, 326.00 MiB/s [2024-11-26T18:24:51.999Z] 83946.67 IOPS, 327.92 MiB/s [2024-11-26T18:24:52.938Z] 84400.00 IOPS, 329.69 MiB/s 00:20:59.603 Latency(us) 00:20:59.603 [2024-11-26T18:24:52.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.603 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:20:59.603 xnvme_bdev : 5.00 82861.90 323.68 0.00 0.00 769.75 296.92 2518.41 00:20:59.603 [2024-11-26T18:24:52.938Z] =================================================================================================================== 00:20:59.603 [2024-11-26T18:24:52.938Z] Total : 82861.90 323.68 0.00 0.00 769.75 296.92 2518.41 00:21:00.540 18:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:00.540 18:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:00.540 18:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:21:00.540 18:24:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:00.540 18:24:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:00.540 { 00:21:00.540 "subsystems": [ 00:21:00.540 { 00:21:00.540 "subsystem": "bdev", 00:21:00.540 "config": [ 00:21:00.540 { 00:21:00.540 "params": { 00:21:00.540 "io_mechanism": "io_uring_cmd", 00:21:00.540 "conserve_cpu": false, 00:21:00.540 "filename": "/dev/ng0n1", 00:21:00.540 "name": "xnvme_bdev" 00:21:00.540 }, 00:21:00.540 "method": "bdev_xnvme_create" 00:21:00.540 }, 00:21:00.540 { 00:21:00.540 "method": "bdev_wait_for_examine" 00:21:00.540 } 00:21:00.540 ] 00:21:00.540 } 00:21:00.540 ] 00:21:00.540 } 00:21:00.540 [2024-11-26 18:24:53.805979] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:21:00.540 [2024-11-26 18:24:53.806198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73159 ] 00:21:00.798 [2024-11-26 18:24:53.977675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.798 [2024-11-26 18:24:54.088206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.365 Running I/O for 5 seconds... 
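At this point bdevperf has been launched with identical parameters except for -w: randread, randwrite, unmap above, and write_zeroes just below. A condensed sketch of the sweep, reusing the hypothetical xnvme.json from earlier:

    for wl in randread randwrite unmap write_zeroes; do
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json xnvme.json -q 64 -w "$wl" -t 5 -T xnvme_bdev -o 4096
    done

The real harness iterates its io_pattern_ref array rather than a hard-coded list, but the per-workload invocation is the same. The unmap total above (82861.90 IOPS versus roughly 25k for the 4 KiB reads and writes) is expected, since deallocate commands carry no data payload.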
00:21:03.229 64660.00 IOPS, 252.58 MiB/s [2024-11-26T18:24:57.497Z] 65240.50 IOPS, 254.85 MiB/s [2024-11-26T18:24:58.514Z] 65359.67 IOPS, 255.31 MiB/s [2024-11-26T18:24:59.482Z] 64475.25 IOPS, 251.86 MiB/s [2024-11-26T18:24:59.482Z] 59441.60 IOPS, 232.19 MiB/s 00:21:06.147 Latency(us) 00:21:06.147 [2024-11-26T18:24:59.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.147 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:21:06.147 xnvme_bdev : 5.01 59325.36 231.74 0.00 0.00 1075.14 165.45 9444.05 00:21:06.147 [2024-11-26T18:24:59.482Z] =================================================================================================================== 00:21:06.147 [2024-11-26T18:24:59.482Z] Total : 59325.36 231.74 0.00 0.00 1075.14 165.45 9444.05 00:21:07.525 00:21:07.525 real 0m27.074s 00:21:07.525 user 0m15.695s 00:21:07.525 sys 0m10.953s 00:21:07.525 18:25:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.525 18:25:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:07.525 ************************************ 00:21:07.525 END TEST xnvme_bdevperf 00:21:07.525 ************************************ 00:21:07.525 18:25:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:07.525 18:25:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:07.525 18:25:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.525 18:25:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:07.525 ************************************ 00:21:07.525 START TEST xnvme_fio_plugin 00:21:07.525 ************************************ 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
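The xnvme_fio_plugin test starting here runs stock fio against SPDK's external spdk_bdev ioengine. Stripped of the xtrace plumbing, the command the wrapper assembles is essentially the following; the libasan preload is only there because this is an ASan build, and xnvme.json stands in for the config the harness pipes through /dev/fd/62:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=xnvme.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

Note that --filename names the bdev from the JSON config rather than a device node, and --thread=1 is needed because the SPDK plugin only supports fio's threaded mode.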
00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:07.525 18:25:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:07.525 { 00:21:07.525 "subsystems": [ 00:21:07.525 { 00:21:07.525 "subsystem": "bdev", 00:21:07.525 "config": [ 00:21:07.525 { 00:21:07.525 "params": { 00:21:07.525 "io_mechanism": "io_uring_cmd", 00:21:07.525 "conserve_cpu": false, 00:21:07.525 "filename": "/dev/ng0n1", 00:21:07.525 "name": "xnvme_bdev" 00:21:07.525 }, 00:21:07.525 "method": "bdev_xnvme_create" 00:21:07.525 }, 00:21:07.525 { 00:21:07.525 "method": "bdev_wait_for_examine" 00:21:07.525 } 00:21:07.525 ] 00:21:07.525 } 00:21:07.525 ] 00:21:07.525 } 00:21:07.525 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:07.525 fio-3.35 00:21:07.526 Starting 1 thread 00:21:14.100 00:21:14.100 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73283: Tue Nov 26 18:25:06 2024 00:21:14.100 read: IOPS=23.7k, BW=92.7MiB/s (97.2MB/s)(464MiB/5002msec) 00:21:14.100 slat (nsec): min=6101, max=78965, avg=9935.82, stdev=2656.74 00:21:14.100 clat (usec): min=1225, max=6832, avg=2308.56, stdev=249.81 00:21:14.100 lat (usec): min=1235, max=6846, avg=2318.49, stdev=250.18 00:21:14.100 clat percentiles (usec): 00:21:14.100 | 1.00th=[ 1860], 5.00th=[ 1958], 10.00th=[ 2008], 20.00th=[ 2089], 00:21:14.100 | 30.00th=[ 2147], 40.00th=[ 2245], 50.00th=[ 2311], 60.00th=[ 2376], 00:21:14.100 | 70.00th=[ 2442], 80.00th=[ 2540], 90.00th=[ 2638], 95.00th=[ 2671], 00:21:14.100 | 99.00th=[ 2802], 99.50th=[ 2835], 99.90th=[ 2933], 99.95th=[ 6128], 00:21:14.100 | 99.99th=[ 6718] 00:21:14.100 bw ( KiB/s): min=93184, max=96256, per=100.00%, avg=95004.44, stdev=961.66, samples=9 00:21:14.100 iops : min=23296, max=24064, avg=23751.11, stdev=240.41, samples=9 00:21:14.100 lat (msec) : 2=9.41%, 4=90.53%, 10=0.05% 00:21:14.100 cpu : usr=46.99%, sys=51.71%, ctx=13, majf=0, minf=762 00:21:14.100 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:14.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.100 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:21:14.100 
issued rwts: total=118757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.100 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:14.100 00:21:14.100 Run status group 0 (all jobs): 00:21:14.101 READ: bw=92.7MiB/s (97.2MB/s), 92.7MiB/s-92.7MiB/s (97.2MB/s-97.2MB/s), io=464MiB (486MB), run=5002-5002msec 00:21:14.670 ----------------------------------------------------- 00:21:14.670 Suppressions used: 00:21:14.670 count bytes template 00:21:14.670 1 11 /usr/src/fio/parse.c 00:21:14.670 1 8 libtcmalloc_minimal.so 00:21:14.670 1 904 libcrypto.so 00:21:14.670 ----------------------------------------------------- 00:21:14.670 00:21:14.930 18:25:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:14.930 18:25:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:14.930 18:25:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:14.930 18:25:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:14.930 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:14.930 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:14.930 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:14.930 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:14.930 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:14.931 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:14.931 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:14.931 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:14.931 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:14.931 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:14.931 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:14.931 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:14.931 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:14.931 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:14.931 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:14.931 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:14.931 18:25:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite 
--time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:14.931 { 00:21:14.931 "subsystems": [ 00:21:14.931 { 00:21:14.931 "subsystem": "bdev", 00:21:14.931 "config": [ 00:21:14.931 { 00:21:14.931 "params": { 00:21:14.931 "io_mechanism": "io_uring_cmd", 00:21:14.931 "conserve_cpu": false, 00:21:14.931 "filename": "/dev/ng0n1", 00:21:14.931 "name": "xnvme_bdev" 00:21:14.931 }, 00:21:14.931 "method": "bdev_xnvme_create" 00:21:14.931 }, 00:21:14.931 { 00:21:14.931 "method": "bdev_wait_for_examine" 00:21:14.931 } 00:21:14.931 ] 00:21:14.931 } 00:21:14.931 ] 00:21:14.931 } 00:21:15.191 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:15.191 fio-3.35 00:21:15.191 Starting 1 thread 00:21:21.764 00:21:21.764 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73378: Tue Nov 26 18:25:14 2024 00:21:21.764 write: IOPS=24.3k, BW=95.1MiB/s (99.7MB/s)(475MiB/5002msec); 0 zone resets 00:21:21.764 slat (usec): min=2, max=179, avg= 9.94, stdev= 3.52 00:21:21.765 clat (usec): min=463, max=12086, avg=2248.50, stdev=473.50 00:21:21.765 lat (usec): min=466, max=12096, avg=2258.43, stdev=475.35 00:21:21.765 clat percentiles (usec): 00:21:21.765 | 1.00th=[ 750], 5.00th=[ 1045], 10.00th=[ 1532], 20.00th=[ 2073], 00:21:21.765 | 30.00th=[ 2180], 40.00th=[ 2245], 50.00th=[ 2343], 60.00th=[ 2409], 00:21:21.765 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2704], 95.00th=[ 2769], 00:21:21.765 | 99.00th=[ 2900], 99.50th=[ 2933], 99.90th=[ 3195], 99.95th=[ 3654], 00:21:21.765 | 99.99th=[ 3916] 00:21:21.765 bw ( KiB/s): min=90624, max=152328, per=100.00%, avg=98232.89, stdev=20290.69, samples=9 00:21:21.765 iops : min=22656, max=38082, avg=24558.22, stdev=5072.67, samples=9 00:21:21.765 lat (usec) : 500=0.01%, 750=0.99%, 1000=3.08% 00:21:21.765 lat (msec) : 2=10.20%, 4=85.71%, 10=0.01%, 20=0.01% 00:21:21.765 cpu : usr=48.57%, sys=50.13%, ctx=61, majf=0, minf=763 00:21:21.765 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=12.0%, 16=24.0%, 32=51.9%, >=64=1.6% 00:21:21.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.765 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:21:21.765 issued rwts: total=0,121715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.765 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:21.765 00:21:21.765 Run status group 0 (all jobs): 00:21:21.765 WRITE: bw=95.1MiB/s (99.7MB/s), 95.1MiB/s-95.1MiB/s (99.7MB/s-99.7MB/s), io=475MiB (499MB), run=5002-5002msec 00:21:22.331 ----------------------------------------------------- 00:21:22.331 Suppressions used: 00:21:22.331 count bytes template 00:21:22.331 1 11 /usr/src/fio/parse.c 00:21:22.331 1 8 libtcmalloc_minimal.so 00:21:22.331 1 904 libcrypto.so 00:21:22.331 ----------------------------------------------------- 00:21:22.331 00:21:22.331 00:21:22.331 real 0m15.072s 00:21:22.331 user 0m8.904s 00:21:22.331 sys 0m5.780s 00:21:22.331 ************************************ 00:21:22.331 END TEST xnvme_fio_plugin 00:21:22.331 ************************************ 00:21:22.331 18:25:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.331 18:25:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:22.589 18:25:15 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:21:22.589 18:25:15 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:21:22.589 18:25:15 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 
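Everything from here on repeats the rpc/bdevperf/fio battery with conserve_cpu flipped to true. In the generated JSON the only change is one boolean in the bdev_xnvme_create params, sketched here diff-style:

    -        "conserve_cpu": false,
    +        "conserve_cpu": true,

Enabling conserve_cpu is meant to trade completion-polling CPU for some latency, which is consistent with the summaries later in this pass: sys time drops to 7.956s from the first pass's 10.953s, while randread average latency rises to about 2649us from 2524us.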
00:21:22.589 18:25:15 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:21:22.589 18:25:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:22.589 18:25:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.589 18:25:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:22.589 ************************************ 00:21:22.589 START TEST xnvme_rpc 00:21:22.589 ************************************ 00:21:22.589 18:25:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:21:22.589 18:25:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:21:22.589 18:25:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:21:22.589 18:25:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:21:22.589 18:25:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:21:22.589 18:25:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73465 00:21:22.589 18:25:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:22.589 18:25:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73465 00:21:22.589 18:25:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73465 ']' 00:21:22.589 18:25:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.589 18:25:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.589 18:25:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.589 18:25:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.589 18:25:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:22.589 [2024-11-26 18:25:15.822554] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
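As in the first pass, each stored parameter is verified by pulling the bdev config back out of the target and filtering it with jq. The conserve_cpu check, reduced to a standalone command (default RPC socket assumed):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'

This time the expected output is true, since the bdev was created with -c; the same select() pattern recurs below for name, filename, and io_mechanism.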
00:21:22.589 [2024-11-26 18:25:15.822751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73465 ] 00:21:22.848 [2024-11-26 18:25:15.999005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.848 [2024-11-26 18:25:16.112103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:23.785 xnvme_bdev 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:23.785 18:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:21:23.785 18:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:21:24.043 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.043 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:24.043 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.043 18:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73465 00:21:24.043 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73465 ']' 00:21:24.043 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73465 00:21:24.043 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:21:24.043 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.043 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73465 00:21:24.043 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.043 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.043 killing process with pid 73465 00:21:24.043 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73465' 00:21:24.043 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73465 00:21:24.044 18:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73465 00:21:26.584 00:21:26.584 real 0m3.686s 00:21:26.584 user 0m3.781s 00:21:26.584 sys 0m0.496s 00:21:26.584 ************************************ 00:21:26.584 END TEST xnvme_rpc 00:21:26.584 ************************************ 00:21:26.584 18:25:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.584 18:25:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:26.584 18:25:19 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:21:26.584 18:25:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:26.584 18:25:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.584 18:25:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:26.584 ************************************ 00:21:26.584 START TEST xnvme_bdevperf 00:21:26.584 ************************************ 00:21:26.584 18:25:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:21:26.584 18:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:21:26.584 18:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:21:26.584 18:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:26.584 18:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:21:26.584 18:25:19 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:21:26.584 18:25:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:26.584 18:25:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:26.584 { 00:21:26.584 "subsystems": [ 00:21:26.584 { 00:21:26.584 "subsystem": "bdev", 00:21:26.584 "config": [ 00:21:26.584 { 00:21:26.584 "params": { 00:21:26.584 "io_mechanism": "io_uring_cmd", 00:21:26.584 "conserve_cpu": true, 00:21:26.584 "filename": "/dev/ng0n1", 00:21:26.584 "name": "xnvme_bdev" 00:21:26.584 }, 00:21:26.584 "method": "bdev_xnvme_create" 00:21:26.584 }, 00:21:26.584 { 00:21:26.584 "method": "bdev_wait_for_examine" 00:21:26.584 } 00:21:26.584 ] 00:21:26.584 } 00:21:26.584 ] 00:21:26.584 } 00:21:26.584 [2024-11-26 18:25:19.575812] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:21:26.584 [2024-11-26 18:25:19.576030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73544 ] 00:21:26.584 [2024-11-26 18:25:19.757410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.584 [2024-11-26 18:25:19.864835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.159 Running I/O for 5 seconds... 00:21:29.037 25152.00 IOPS, 98.25 MiB/s [2024-11-26T18:25:23.310Z] 24544.00 IOPS, 95.88 MiB/s [2024-11-26T18:25:24.247Z] 24298.67 IOPS, 94.92 MiB/s [2024-11-26T18:25:25.627Z] 24224.00 IOPS, 94.62 MiB/s [2024-11-26T18:25:25.627Z] 24115.20 IOPS, 94.20 MiB/s 00:21:32.292 Latency(us) 00:21:32.292 [2024-11-26T18:25:25.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.292 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:21:32.292 xnvme_bdev : 5.01 24069.17 94.02 0.00 0.00 2648.51 897.90 12248.65 00:21:32.292 [2024-11-26T18:25:25.627Z] =================================================================================================================== 00:21:32.292 [2024-11-26T18:25:25.627Z] Total : 24069.17 94.02 0.00 0.00 2648.51 897.90 12248.65 00:21:33.229 18:25:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:33.229 18:25:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:21:33.229 18:25:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:33.229 18:25:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:33.229 18:25:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:33.229 { 00:21:33.229 "subsystems": [ 00:21:33.229 { 00:21:33.229 "subsystem": "bdev", 00:21:33.229 "config": [ 00:21:33.229 { 00:21:33.229 "params": { 00:21:33.229 "io_mechanism": "io_uring_cmd", 00:21:33.229 "conserve_cpu": true, 00:21:33.229 "filename": "/dev/ng0n1", 00:21:33.229 "name": "xnvme_bdev" 00:21:33.229 }, 00:21:33.229 "method": "bdev_xnvme_create" 00:21:33.229 }, 00:21:33.229 { 00:21:33.229 "method": "bdev_wait_for_examine" 00:21:33.229 } 00:21:33.229 ] 00:21:33.229 } 00:21:33.230 ] 00:21:33.230 } 00:21:33.230 [2024-11-26 18:25:26.347391] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:21:33.230 [2024-11-26 18:25:26.347598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73624 ] 00:21:33.230 [2024-11-26 18:25:26.530335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.489 [2024-11-26 18:25:26.633630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.748 Running I/O for 5 seconds... 00:21:36.102 24576.00 IOPS, 96.00 MiB/s [2024-11-26T18:25:30.007Z] 23968.00 IOPS, 93.62 MiB/s [2024-11-26T18:25:31.383Z] 23744.00 IOPS, 92.75 MiB/s [2024-11-26T18:25:32.323Z] 23648.00 IOPS, 92.38 MiB/s 00:21:38.988 Latency(us) 00:21:38.988 [2024-11-26T18:25:32.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.988 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:38.988 xnvme_bdev : 5.01 23561.56 92.04 0.00 0.00 2705.18 930.10 9558.53 00:21:38.988 [2024-11-26T18:25:32.323Z] =================================================================================================================== 00:21:38.988 [2024-11-26T18:25:32.323Z] Total : 23561.56 92.04 0.00 0.00 2705.18 930.10 9558.53 00:21:39.926 18:25:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:39.926 18:25:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:21:39.926 18:25:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:39.926 18:25:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:39.926 18:25:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:39.926 { 00:21:39.927 "subsystems": [ 00:21:39.927 { 00:21:39.927 "subsystem": "bdev", 00:21:39.927 "config": [ 00:21:39.927 { 00:21:39.927 "params": { 00:21:39.927 "io_mechanism": "io_uring_cmd", 00:21:39.927 "conserve_cpu": true, 00:21:39.927 "filename": "/dev/ng0n1", 00:21:39.927 "name": "xnvme_bdev" 00:21:39.927 }, 00:21:39.927 "method": "bdev_xnvme_create" 00:21:39.927 }, 00:21:39.927 { 00:21:39.927 "method": "bdev_wait_for_examine" 00:21:39.927 } 00:21:39.927 ] 00:21:39.927 } 00:21:39.927 ] 00:21:39.927 } 00:21:39.927 [2024-11-26 18:25:33.161662] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:21:39.927 [2024-11-26 18:25:33.161801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73704 ] 00:21:40.197 [2024-11-26 18:25:33.345121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.197 [2024-11-26 18:25:33.456629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.764 Running I/O for 5 seconds... 
00:21:42.648 78720.00 IOPS, 307.50 MiB/s [2024-11-26T18:25:36.920Z] 78816.00 IOPS, 307.88 MiB/s [2024-11-26T18:25:37.858Z] 78826.67 IOPS, 307.92 MiB/s [2024-11-26T18:25:38.796Z] 78784.00 IOPS, 307.75 MiB/s [2024-11-26T18:25:38.796Z] 78848.00 IOPS, 308.00 MiB/s 00:21:45.461 Latency(us) 00:21:45.461 [2024-11-26T18:25:38.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.462 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:21:45.462 xnvme_bdev : 5.00 78831.99 307.94 0.00 0.00 809.24 579.52 3477.13 00:21:45.462 [2024-11-26T18:25:38.797Z] =================================================================================================================== 00:21:45.462 [2024-11-26T18:25:38.797Z] Total : 78831.99 307.94 0.00 0.00 809.24 579.52 3477.13 00:21:46.860 18:25:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:46.860 18:25:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:21:46.860 18:25:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:46.860 18:25:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:46.860 18:25:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:46.860 { 00:21:46.860 "subsystems": [ 00:21:46.860 { 00:21:46.860 "subsystem": "bdev", 00:21:46.860 "config": [ 00:21:46.860 { 00:21:46.860 "params": { 00:21:46.860 "io_mechanism": "io_uring_cmd", 00:21:46.860 "conserve_cpu": true, 00:21:46.860 "filename": "/dev/ng0n1", 00:21:46.860 "name": "xnvme_bdev" 00:21:46.860 }, 00:21:46.860 "method": "bdev_xnvme_create" 00:21:46.860 }, 00:21:46.860 { 00:21:46.860 "method": "bdev_wait_for_examine" 00:21:46.860 } 00:21:46.860 ] 00:21:46.860 } 00:21:46.860 ] 00:21:46.860 } 00:21:46.860 [2024-11-26 18:25:39.971205] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:21:46.860 [2024-11-26 18:25:39.971392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73774 ] 00:21:46.860 [2024-11-26 18:25:40.153599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.120 [2024-11-26 18:25:40.268584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.380 Running I/O for 5 seconds... 
00:21:49.696 40007.00 IOPS, 156.28 MiB/s [2024-11-26T18:25:43.967Z] 43894.50 IOPS, 171.46 MiB/s [2024-11-26T18:25:44.903Z] 45275.67 IOPS, 176.86 MiB/s [2024-11-26T18:25:45.840Z] 46353.00 IOPS, 181.07 MiB/s 00:21:52.505 Latency(us) 00:21:52.505 [2024-11-26T18:25:45.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.505 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:21:52.505 xnvme_bdev : 5.00 45561.73 177.98 0.00 0.00 1397.80 73.78 15568.38 00:21:52.505 [2024-11-26T18:25:45.840Z] =================================================================================================================== 00:21:52.505 [2024-11-26T18:25:45.840Z] Total : 45561.73 177.98 0.00 0.00 1397.80 73.78 15568.38 00:21:53.442 00:21:53.442 real 0m27.176s 00:21:53.442 user 0m16.992s 00:21:53.442 sys 0m7.956s 00:21:53.442 18:25:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.442 18:25:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:53.442 ************************************ 00:21:53.442 END TEST xnvme_bdevperf 00:21:53.442 ************************************ 00:21:53.442 18:25:46 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:53.442 18:25:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:53.442 18:25:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.442 18:25:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:53.442 ************************************ 00:21:53.442 START TEST xnvme_fio_plugin 00:21:53.442 ************************************ 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:53.442 18:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:53.702 { 00:21:53.702 "subsystems": [ 00:21:53.702 { 00:21:53.702 "subsystem": "bdev", 00:21:53.702 "config": [ 00:21:53.702 { 00:21:53.702 "params": { 00:21:53.702 "io_mechanism": "io_uring_cmd", 00:21:53.702 "conserve_cpu": true, 00:21:53.702 "filename": "/dev/ng0n1", 00:21:53.702 "name": "xnvme_bdev" 00:21:53.702 }, 00:21:53.702 "method": "bdev_xnvme_create" 00:21:53.702 }, 00:21:53.702 { 00:21:53.702 "method": "bdev_wait_for_examine" 00:21:53.702 } 00:21:53.702 ] 00:21:53.702 } 00:21:53.702 ] 00:21:53.702 } 00:21:53.702 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:53.702 fio-3.35 00:21:53.702 Starting 1 thread 00:22:00.277 00:22:00.277 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73898: Tue Nov 26 18:25:52 2024 00:22:00.277 read: IOPS=34.7k, BW=136MiB/s (142MB/s)(678MiB/5001msec) 00:22:00.277 slat (nsec): min=2197, max=92710, avg=6657.20, stdev=4126.82 00:22:00.277 clat (usec): min=673, max=7175, avg=1583.66, stdev=717.38 00:22:00.277 lat (usec): min=676, max=7188, avg=1590.31, stdev=720.88 00:22:00.277 clat percentiles (usec): 00:22:00.277 | 1.00th=[ 742], 5.00th=[ 775], 10.00th=[ 799], 20.00th=[ 848], 00:22:00.277 | 30.00th=[ 889], 40.00th=[ 938], 50.00th=[ 1778], 60.00th=[ 2040], 00:22:00.277 | 70.00th=[ 2180], 80.00th=[ 2343], 90.00th=[ 2474], 95.00th=[ 2573], 00:22:00.277 | 99.00th=[ 2704], 99.50th=[ 2737], 99.90th=[ 2933], 99.95th=[ 3064], 00:22:00.277 | 99.99th=[ 6980] 00:22:00.277 bw ( KiB/s): min=94720, max=263168, per=100.00%, avg=143644.44, stdev=70316.22, samples=9 00:22:00.277 iops : min=23680, max=65792, avg=35911.11, stdev=17579.06, samples=9 00:22:00.277 lat (usec) : 750=1.94%, 1000=43.52% 00:22:00.277 lat (msec) : 2=12.04%, 4=42.46%, 10=0.04% 00:22:00.277 cpu : usr=45.32%, sys=51.84%, ctx=10, majf=0, minf=762 00:22:00.277 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:00.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.277 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:22:00.277 issued rwts: 
total=173632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.277 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:00.277 00:22:00.277 Run status group 0 (all jobs): 00:22:00.277 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=678MiB (711MB), run=5001-5001msec 00:22:00.847 ----------------------------------------------------- 00:22:00.847 Suppressions used: 00:22:00.847 count bytes template 00:22:00.847 1 11 /usr/src/fio/parse.c 00:22:00.847 1 8 libtcmalloc_minimal.so 00:22:00.847 1 904 libcrypto.so 00:22:00.847 ----------------------------------------------------- 00:22:00.847 00:22:00.847 18:25:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:00.847 18:25:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:00.847 18:25:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:00.847 18:25:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:00.847 18:25:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:00.847 18:25:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:00.847 18:25:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:00.847 18:25:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:00.847 18:25:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:00.848 18:25:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:00.848 18:25:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:00.848 18:25:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:00.848 18:25:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:00.848 18:25:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:00.848 18:25:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:00.848 18:25:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:00.848 18:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:00.848 18:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:00.848 18:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:00.848 18:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:00.848 18:25:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:22:00.848 { 00:22:00.848 "subsystems": [ 00:22:00.848 { 00:22:00.848 "subsystem": "bdev", 00:22:00.848 "config": [ 00:22:00.848 { 00:22:00.848 "params": { 00:22:00.848 "io_mechanism": "io_uring_cmd", 00:22:00.848 "conserve_cpu": true, 00:22:00.848 "filename": "/dev/ng0n1", 00:22:00.848 "name": "xnvme_bdev" 00:22:00.848 }, 00:22:00.848 "method": "bdev_xnvme_create" 00:22:00.848 }, 00:22:00.848 { 00:22:00.848 "method": "bdev_wait_for_examine" 00:22:00.848 } 00:22:00.848 ] 00:22:00.848 } 00:22:00.848 ] 00:22:00.848 } 00:22:01.108 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:01.108 fio-3.35 00:22:01.108 Starting 1 thread 00:22:07.695 00:22:07.695 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73989: Tue Nov 26 18:25:59 2024 00:22:07.695 write: IOPS=23.3k, BW=91.2MiB/s (95.6MB/s)(456MiB/5002msec); 0 zone resets 00:22:07.695 slat (nsec): min=5035, max=83764, avg=10543.19, stdev=3111.44 00:22:07.695 clat (usec): min=1324, max=3335, avg=2329.29, stdev=243.39 00:22:07.695 lat (usec): min=1330, max=3350, avg=2339.83, stdev=243.85 00:22:07.695 clat percentiles (usec): 00:22:07.695 | 1.00th=[ 1827], 5.00th=[ 1958], 10.00th=[ 2008], 20.00th=[ 2114], 00:22:07.696 | 30.00th=[ 2180], 40.00th=[ 2245], 50.00th=[ 2311], 60.00th=[ 2409], 00:22:07.696 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 2638], 95.00th=[ 2704], 00:22:07.696 | 99.00th=[ 2802], 99.50th=[ 2868], 99.90th=[ 3064], 99.95th=[ 3130], 00:22:07.696 | 99.99th=[ 3261] 00:22:07.696 bw ( KiB/s): min=92160, max=95041, per=100.00%, avg=93390.33, stdev=986.40, samples=9 00:22:07.696 iops : min=23040, max=23760, avg=23347.56, stdev=246.55, samples=9 00:22:07.696 lat (msec) : 2=8.53%, 4=91.47% 00:22:07.696 cpu : usr=47.89%, sys=49.07%, ctx=6, majf=0, minf=763 00:22:07.696 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:07.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.696 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:22:07.696 issued rwts: total=0,116732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.696 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:07.696 00:22:07.696 Run status group 0 (all jobs): 00:22:07.696 WRITE: bw=91.2MiB/s (95.6MB/s), 91.2MiB/s-91.2MiB/s (95.6MB/s-95.6MB/s), io=456MiB (478MB), run=5002-5002msec 00:22:07.955 ----------------------------------------------------- 00:22:07.955 Suppressions used: 00:22:07.955 count bytes template 00:22:07.955 1 11 /usr/src/fio/parse.c 00:22:07.955 1 8 libtcmalloc_minimal.so 00:22:07.955 1 904 libcrypto.so 00:22:07.955 ----------------------------------------------------- 00:22:07.955 00:22:07.955 00:22:07.955 real 0m14.524s 00:22:07.955 user 0m8.397s 00:22:07.955 sys 0m5.589s 00:22:07.955 18:26:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.955 18:26:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:07.955 ************************************ 00:22:07.955 END TEST xnvme_fio_plugin 00:22:07.955 ************************************ 00:22:08.214 Process with pid 73465 is not found 00:22:08.214 18:26:01 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73465 00:22:08.214 18:26:01 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73465 ']' 00:22:08.214 18:26:01 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73465 00:22:08.214 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73465) - No such process 00:22:08.214 18:26:01 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73465 is not found' 00:22:08.214 00:22:08.214 real 3m52.317s 00:22:08.214 user 2m14.288s 00:22:08.214 sys 1m23.985s 00:22:08.214 18:26:01 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:08.214 18:26:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:08.214 ************************************ 00:22:08.214 END TEST nvme_xnvme 00:22:08.214 ************************************ 00:22:08.214 18:26:01 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:08.214 18:26:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:08.214 18:26:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:08.214 18:26:01 -- common/autotest_common.sh@10 -- # set +x 00:22:08.214 ************************************ 00:22:08.214 START TEST blockdev_xnvme 00:22:08.214 ************************************ 00:22:08.214 18:26:01 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:08.214 * Looking for test storage... 00:22:08.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:08.214 18:26:01 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:08.214 18:26:01 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:22:08.214 18:26:01 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:08.474 18:26:01 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:08.474 18:26:01 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:22:08.474 18:26:01 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:08.474 18:26:01 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:08.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.474 --rc genhtml_branch_coverage=1 00:22:08.474 --rc genhtml_function_coverage=1 00:22:08.474 --rc genhtml_legend=1 00:22:08.474 --rc geninfo_all_blocks=1 00:22:08.474 --rc geninfo_unexecuted_blocks=1 00:22:08.474 00:22:08.474 ' 00:22:08.474 18:26:01 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:08.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.474 --rc genhtml_branch_coverage=1 00:22:08.474 --rc genhtml_function_coverage=1 00:22:08.474 --rc genhtml_legend=1 00:22:08.474 --rc geninfo_all_blocks=1 00:22:08.474 --rc geninfo_unexecuted_blocks=1 00:22:08.474 00:22:08.474 ' 00:22:08.474 18:26:01 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:08.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.474 --rc genhtml_branch_coverage=1 00:22:08.474 --rc genhtml_function_coverage=1 00:22:08.474 --rc genhtml_legend=1 00:22:08.474 --rc geninfo_all_blocks=1 00:22:08.474 --rc geninfo_unexecuted_blocks=1 00:22:08.474 00:22:08.474 ' 00:22:08.474 18:26:01 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:08.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.474 --rc genhtml_branch_coverage=1 00:22:08.474 --rc genhtml_function_coverage=1 00:22:08.474 --rc genhtml_legend=1 00:22:08.474 --rc geninfo_all_blocks=1 00:22:08.474 --rc geninfo_unexecuted_blocks=1 00:22:08.474 00:22:08.474 ' 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74126 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:08.474 18:26:01 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74126 00:22:08.474 18:26:01 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74126 ']' 00:22:08.474 18:26:01 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.474 18:26:01 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.474 18:26:01 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.474 18:26:01 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.474 18:26:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:08.474 [2024-11-26 18:26:01.696354] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:22:08.474 [2024-11-26 18:26:01.696546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74126 ] 00:22:08.734 [2024-11-26 18:26:01.869722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.734 [2024-11-26 18:26:01.984355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.672 18:26:02 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.672 18:26:02 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:22:09.672 18:26:02 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:22:09.672 18:26:02 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:22:09.672 18:26:02 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:22:09.672 18:26:02 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:22:09.672 18:26:02 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:10.241 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:10.810 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:10.810 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:10.810 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:22:11.070 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:22:11.070 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0c0n1 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0c0n1 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in 
/sys/block/nvme* 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:22:11.070 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:11.070 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:11.070 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:22:11.070 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:11.070 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:11.070 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:11.070 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n2 ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n3 ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- 
# for nvme in /dev/nvme*n* 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring -c' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:22:11.071 nvme0n1 00:22:11.071 nvme1n1 00:22:11.071 nvme1n2 00:22:11.071 nvme1n3 00:22:11.071 nvme2n1 00:22:11.071 nvme3n1 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.071 18:26:04 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:11.071 18:26:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:22:11.071 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:22:11.333 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "bf4b1a0b-6652-479c-a7a1-605a1f7b9adf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "bf4b1a0b-6652-479c-a7a1-605a1f7b9adf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1deae91d-0142-401b-94d5-27fcb5f41646"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1deae91d-0142-401b-94d5-27fcb5f41646",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "77e823ac-fcb2-42a5-b8d9-58d68256fc0a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "77e823ac-fcb2-42a5-b8d9-58d68256fc0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "48fd55ea-ed19-42fc-845f-b76e6e7740f3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "48fd55ea-ed19-42fc-845f-b76e6e7740f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ac367b75-51c9-4097-a803-e4ed59b7ba65"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ac367b75-51c9-4097-a803-e4ed59b7ba65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "73d2dc39-62f0-4b0f-a113-7d25d89fe876"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "73d2dc39-62f0-4b0f-a113-7d25d89fe876",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:22:11.333 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:22:11.333 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:22:11.333 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:22:11.333 18:26:04 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 74126 00:22:11.333 18:26:04 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74126 ']' 00:22:11.333 18:26:04 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74126 00:22:11.333 18:26:04 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:22:11.333 18:26:04 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.333 18:26:04 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74126 00:22:11.333 killing process with pid 74126 00:22:11.333 18:26:04 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:11.333 18:26:04 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:11.333 18:26:04 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74126' 00:22:11.333 18:26:04 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74126 
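The killprocess helper traced at this point follows autotest's probe, kill, reap pattern. A condensed sketch of that flow is below; the sudo check, the ps comm lookup, and the argument validation in the real autotest_common.sh are trimmed, but both echo messages match the ones in this trace:

killprocess() {
    local pid=$1
    if ! kill -0 "$pid"; then                      # probe only; signal 0 delivers nothing
        echo "Process with pid $pid is not found"  # the branch pid 73465 hit earlier
        return 0
    fi
    echo "killing process with pid $pid"
    kill "$pid"                                    # SIGTERM lets the SPDK reactor shut down cleanly
    wait "$pid"                                    # reap the child so its exit status is propagated
}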
00:22:11.333 18:26:04 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74126 00:22:13.865 18:26:06 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:13.865 18:26:06 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:22:13.865 18:26:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:13.865 18:26:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.865 18:26:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:13.865 ************************************ 00:22:13.865 START TEST bdev_hello_world 00:22:13.865 ************************************ 00:22:13.865 18:26:06 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:22:13.865 [2024-11-26 18:26:06.843343] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:22:13.865 [2024-11-26 18:26:06.843577] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74422 ] 00:22:13.865 [2024-11-26 18:26:07.029607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.865 [2024-11-26 18:26:07.140554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.432 [2024-11-26 18:26:07.562819] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:14.432 [2024-11-26 18:26:07.562938] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:22:14.432 [2024-11-26 18:26:07.562987] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:14.432 [2024-11-26 18:26:07.564759] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:14.432 [2024-11-26 18:26:07.565062] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:14.432 [2024-11-26 18:26:07.565125] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:14.432 [2024-11-26 18:26:07.565331] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
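The write and read round trip above is the stock hello_bdev example pointed at the first xnvme bdev. Stripped of the test harness it reduces to a single invocation, with paths exactly as used in this run:

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b nvme0n1

The app opens the named bdev, writes "Hello World!" through an io channel, reads the string back, and stops; the hello_bdev.c NOTICE lines around this point trace exactly those callbacks.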
00:22:14.432 00:22:14.432 [2024-11-26 18:26:07.565391] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:15.367 00:22:15.367 real 0m1.933s 00:22:15.367 user 0m1.574s 00:22:15.367 sys 0m0.244s 00:22:15.367 18:26:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.367 ************************************ 00:22:15.367 END TEST bdev_hello_world 00:22:15.367 ************************************ 00:22:15.367 18:26:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:15.626 18:26:08 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:22:15.626 18:26:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:15.626 18:26:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.626 18:26:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:15.626 ************************************ 00:22:15.626 START TEST bdev_bounds 00:22:15.626 ************************************ 00:22:15.626 18:26:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:22:15.626 18:26:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74459 00:22:15.626 18:26:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:15.626 18:26:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:15.626 18:26:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74459' 00:22:15.626 Process bdevio pid: 74459 00:22:15.626 18:26:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74459 00:22:15.626 18:26:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74459 ']' 00:22:15.626 18:26:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.626 18:26:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.626 18:26:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.626 18:26:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.626 18:26:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:15.626 [2024-11-26 18:26:08.841895] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:22:15.626 [2024-11-26 18:26:08.842110] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74459 ] 00:22:15.885 [2024-11-26 18:26:09.011441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:15.885 [2024-11-26 18:26:09.128257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.885 [2024-11-26 18:26:09.128420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.885 [2024-11-26 18:26:09.128459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.458 18:26:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.458 18:26:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:22:16.458 18:26:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:16.458 I/O targets: 00:22:16.458 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:22:16.458 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:16.458 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:16.458 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:16.458 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:22:16.458 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:22:16.458 00:22:16.458 00:22:16.458 CUnit - A unit testing framework for C - Version 2.1-3 00:22:16.458 http://cunit.sourceforge.net/ 00:22:16.458 00:22:16.458 00:22:16.458 Suite: bdevio tests on: nvme3n1 00:22:16.458 Test: blockdev write read block ...passed 00:22:16.458 Test: blockdev write zeroes read block ...passed 00:22:16.458 Test: blockdev write zeroes read no split ...passed 00:22:16.717 Test: blockdev write zeroes read split ...passed 00:22:16.717 Test: blockdev write zeroes read split partial ...passed 00:22:16.717 Test: blockdev reset ...passed 00:22:16.717 Test: blockdev write read 8 blocks ...passed 00:22:16.717 Test: blockdev write read size > 128k ...passed 00:22:16.717 Test: blockdev write read invalid size ...passed 00:22:16.717 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:16.717 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:16.717 Test: blockdev write read max offset ...passed 00:22:16.717 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:16.717 Test: blockdev writev readv 8 blocks ...passed 00:22:16.717 Test: blockdev writev readv 30 x 1block ...passed 00:22:16.717 Test: blockdev writev readv block ...passed 00:22:16.717 Test: blockdev writev readv size > 128k ...passed 00:22:16.717 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:16.717 Test: blockdev comparev and writev ...passed 00:22:16.717 Test: blockdev nvme passthru rw ...passed 00:22:16.717 Test: blockdev nvme passthru vendor specific ...passed 00:22:16.717 Test: blockdev nvme admin passthru ...passed 00:22:16.717 Test: blockdev copy ...passed 00:22:16.717 Suite: bdevio tests on: nvme2n1 00:22:16.717 Test: blockdev write read block ...passed 00:22:16.717 Test: blockdev write zeroes read block ...passed 00:22:16.717 Test: blockdev write zeroes read no split ...passed 00:22:16.717 Test: blockdev write zeroes read split ...passed 00:22:16.717 Test: blockdev write zeroes read split partial ...passed 00:22:16.717 Test: blockdev reset ...passed 
00:22:16.717 Test: blockdev write read 8 blocks ...passed 00:22:16.717 Test: blockdev write read size > 128k ...passed 00:22:16.717 Test: blockdev write read invalid size ...passed 00:22:16.717 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:16.717 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:16.717 Test: blockdev write read max offset ...passed 00:22:16.717 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:16.717 Test: blockdev writev readv 8 blocks ...passed 00:22:16.717 Test: blockdev writev readv 30 x 1block ...passed 00:22:16.717 Test: blockdev writev readv block ...passed 00:22:16.717 Test: blockdev writev readv size > 128k ...passed 00:22:16.717 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:16.717 Test: blockdev comparev and writev ...passed 00:22:16.717 Test: blockdev nvme passthru rw ...passed 00:22:16.717 Test: blockdev nvme passthru vendor specific ...passed 00:22:16.717 Test: blockdev nvme admin passthru ...passed 00:22:16.717 Test: blockdev copy ...passed 00:22:16.717 Suite: bdevio tests on: nvme1n3 00:22:16.717 Test: blockdev write read block ...passed 00:22:16.717 Test: blockdev write zeroes read block ...passed 00:22:16.717 Test: blockdev write zeroes read no split ...passed 00:22:16.717 Test: blockdev write zeroes read split ...passed 00:22:16.717 Test: blockdev write zeroes read split partial ...passed 00:22:16.717 Test: blockdev reset ...passed 00:22:16.717 Test: blockdev write read 8 blocks ...passed 00:22:16.717 Test: blockdev write read size > 128k ...passed 00:22:16.717 Test: blockdev write read invalid size ...passed 00:22:16.717 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:16.717 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:16.717 Test: blockdev write read max offset ...passed 00:22:16.717 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:16.717 Test: blockdev writev readv 8 blocks ...passed 00:22:16.717 Test: blockdev writev readv 30 x 1block ...passed 00:22:16.717 Test: blockdev writev readv block ...passed 00:22:16.717 Test: blockdev writev readv size > 128k ...passed 00:22:16.717 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:16.717 Test: blockdev comparev and writev ...passed 00:22:16.717 Test: blockdev nvme passthru rw ...passed 00:22:16.717 Test: blockdev nvme passthru vendor specific ...passed 00:22:16.717 Test: blockdev nvme admin passthru ...passed 00:22:16.717 Test: blockdev copy ...passed 00:22:16.717 Suite: bdevio tests on: nvme1n2 00:22:16.717 Test: blockdev write read block ...passed 00:22:16.717 Test: blockdev write zeroes read block ...passed 00:22:16.717 Test: blockdev write zeroes read no split ...passed 00:22:16.976 Test: blockdev write zeroes read split ...passed 00:22:16.976 Test: blockdev write zeroes read split partial ...passed 00:22:16.976 Test: blockdev reset ...passed 00:22:16.976 Test: blockdev write read 8 blocks ...passed 00:22:16.977 Test: blockdev write read size > 128k ...passed 00:22:16.977 Test: blockdev write read invalid size ...passed 00:22:16.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:16.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:16.977 Test: blockdev write read max offset ...passed 00:22:16.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:16.977 Test: blockdev writev readv 8 blocks 
...passed 00:22:16.977 Test: blockdev writev readv 30 x 1block ...passed 00:22:16.977 Test: blockdev writev readv block ...passed 00:22:16.977 Test: blockdev writev readv size > 128k ...passed 00:22:16.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:16.977 Test: blockdev comparev and writev ...passed 00:22:16.977 Test: blockdev nvme passthru rw ...passed 00:22:16.977 Test: blockdev nvme passthru vendor specific ...passed 00:22:16.977 Test: blockdev nvme admin passthru ...passed 00:22:16.977 Test: blockdev copy ...passed 00:22:16.977 Suite: bdevio tests on: nvme1n1 00:22:16.977 Test: blockdev write read block ...passed 00:22:16.977 Test: blockdev write zeroes read block ...passed 00:22:16.977 Test: blockdev write zeroes read no split ...passed 00:22:16.977 Test: blockdev write zeroes read split ...passed 00:22:16.977 Test: blockdev write zeroes read split partial ...passed 00:22:16.977 Test: blockdev reset ...passed 00:22:16.977 Test: blockdev write read 8 blocks ...passed 00:22:16.977 Test: blockdev write read size > 128k ...passed 00:22:16.977 Test: blockdev write read invalid size ...passed 00:22:16.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:16.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:16.977 Test: blockdev write read max offset ...passed 00:22:16.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:16.977 Test: blockdev writev readv 8 blocks ...passed 00:22:16.977 Test: blockdev writev readv 30 x 1block ...passed 00:22:16.977 Test: blockdev writev readv block ...passed 00:22:16.977 Test: blockdev writev readv size > 128k ...passed 00:22:16.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:16.977 Test: blockdev comparev and writev ...passed 00:22:16.977 Test: blockdev nvme passthru rw ...passed 00:22:16.977 Test: blockdev nvme passthru vendor specific ...passed 00:22:16.977 Test: blockdev nvme admin passthru ...passed 00:22:16.977 Test: blockdev copy ...passed 00:22:16.977 Suite: bdevio tests on: nvme0n1 00:22:16.977 Test: blockdev write read block ...passed 00:22:16.977 Test: blockdev write zeroes read block ...passed 00:22:16.977 Test: blockdev write zeroes read no split ...passed 00:22:16.977 Test: blockdev write zeroes read split ...passed 00:22:16.977 Test: blockdev write zeroes read split partial ...passed 00:22:16.977 Test: blockdev reset ...passed 00:22:16.977 Test: blockdev write read 8 blocks ...passed 00:22:16.977 Test: blockdev write read size > 128k ...passed 00:22:16.977 Test: blockdev write read invalid size ...passed 00:22:16.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:16.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:16.977 Test: blockdev write read max offset ...passed 00:22:16.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:16.977 Test: blockdev writev readv 8 blocks ...passed 00:22:16.977 Test: blockdev writev readv 30 x 1block ...passed 00:22:16.977 Test: blockdev writev readv block ...passed 00:22:16.977 Test: blockdev writev readv size > 128k ...passed 00:22:16.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:16.977 Test: blockdev comparev and writev ...passed 00:22:16.977 Test: blockdev nvme passthru rw ...passed 00:22:16.977 Test: blockdev nvme passthru vendor specific ...passed 00:22:16.977 Test: blockdev nvme admin passthru ...passed 00:22:16.977 Test: blockdev copy ...passed 
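For reference, the six suites above come from bdevio started in wait mode and driven over RPC by tests.py, the same two commands that appear earlier in this trace; the backgrounding ampersand is shown here only to make the pairing explicit:

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests

The CUnit summary for all six bdevs follows.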
00:22:16.977 00:22:16.977 Run Summary: Type Total Ran Passed Failed Inactive 00:22:16.977 suites 6 6 n/a 0 0 00:22:16.977 tests 138 138 138 0 0 00:22:16.977 asserts 780 780 780 0 n/a 00:22:16.977 00:22:16.977 Elapsed time = 1.437 seconds 00:22:16.977 0 00:22:16.977 18:26:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74459 00:22:16.977 18:26:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74459 ']' 00:22:16.977 18:26:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74459 00:22:16.977 18:26:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:22:17.235 18:26:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.235 18:26:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74459 00:22:17.235 18:26:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:17.235 18:26:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:17.235 18:26:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74459' 00:22:17.235 killing process with pid 74459 00:22:17.235 18:26:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74459 00:22:17.235 18:26:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74459 00:22:18.169 18:26:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:22:18.169 00:22:18.169 real 0m2.715s 00:22:18.169 user 0m6.796s 00:22:18.169 sys 0m0.377s 00:22:18.169 18:26:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.169 18:26:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:18.169 ************************************ 00:22:18.169 END TEST bdev_bounds 00:22:18.169 ************************************ 00:22:18.428 18:26:11 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:22:18.428 18:26:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:18.428 18:26:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.428 18:26:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:18.428 ************************************ 00:22:18.428 START TEST bdev_nbd 00:22:18.428 ************************************ 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74526 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74526 /var/tmp/spdk-nbd.sock 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74526 ']' 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:18.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:18.428 18:26:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:18.428 [2024-11-26 18:26:11.620525] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:22:18.428 [2024-11-26 18:26:11.620690] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.686 [2024-11-26 18:26:11.797714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.686 [2024-11-26 18:26:11.914906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:19.253 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:19.512 
1+0 records in 00:22:19.512 1+0 records out 00:22:19.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406697 s, 10.1 MB/s 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:19.512 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:22:19.771 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:22:19.771 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:22:19.771 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:19.772 1+0 records in 00:22:19.772 1+0 records out 00:22:19.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479171 s, 8.5 MB/s 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:19.772 18:26:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:22:20.031 18:26:13 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:20.031 1+0 records in 00:22:20.031 1+0 records out 00:22:20.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547273 s, 7.5 MB/s 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:20.031 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:22:20.289 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:22:20.289 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:22:20.289 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:22:20.289 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:22:20.289 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:20.289 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:20.289 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:20.289 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:22:20.289 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:20.289 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:20.289 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:20.289 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:20.289 1+0 records in 00:22:20.289 1+0 records out 00:22:20.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431991 s, 9.5 MB/s 00:22:20.290 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.290 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:20.290 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.290 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:20.290 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:20.290 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:20.290 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:20.290 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:20.548 1+0 records in 00:22:20.548 1+0 records out 00:22:20.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666388 s, 6.1 MB/s 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:20.548 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:22:20.807 18:26:13 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:20.807 1+0 records in 00:22:20.807 1+0 records out 00:22:20.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631016 s, 6.5 MB/s 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:20.807 18:26:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:21.065 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:22:21.065 { 00:22:21.065 "nbd_device": "/dev/nbd0", 00:22:21.065 "bdev_name": "nvme0n1" 00:22:21.065 }, 00:22:21.065 { 00:22:21.065 "nbd_device": "/dev/nbd1", 00:22:21.065 "bdev_name": "nvme1n1" 00:22:21.065 }, 00:22:21.065 { 00:22:21.065 "nbd_device": "/dev/nbd2", 00:22:21.065 "bdev_name": "nvme1n2" 00:22:21.065 }, 00:22:21.065 { 00:22:21.065 "nbd_device": "/dev/nbd3", 00:22:21.065 "bdev_name": "nvme1n3" 00:22:21.065 }, 00:22:21.065 { 00:22:21.065 "nbd_device": "/dev/nbd4", 00:22:21.065 "bdev_name": "nvme2n1" 00:22:21.065 }, 00:22:21.065 { 00:22:21.065 "nbd_device": "/dev/nbd5", 00:22:21.065 "bdev_name": "nvme3n1" 00:22:21.065 } 00:22:21.065 ]' 00:22:21.065 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:22:21.065 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:22:21.065 { 00:22:21.065 "nbd_device": "/dev/nbd0", 00:22:21.065 "bdev_name": "nvme0n1" 00:22:21.065 }, 00:22:21.065 { 00:22:21.065 "nbd_device": "/dev/nbd1", 00:22:21.065 "bdev_name": "nvme1n1" 00:22:21.065 }, 00:22:21.065 { 00:22:21.065 "nbd_device": "/dev/nbd2", 00:22:21.065 "bdev_name": "nvme1n2" 00:22:21.065 }, 00:22:21.065 { 00:22:21.065 "nbd_device": "/dev/nbd3", 00:22:21.065 "bdev_name": "nvme1n3" 00:22:21.065 }, 00:22:21.065 { 00:22:21.065 "nbd_device": "/dev/nbd4", 00:22:21.065 "bdev_name": "nvme2n1" 00:22:21.065 }, 00:22:21.065 { 00:22:21.065 "nbd_device": "/dev/nbd5", 00:22:21.065 "bdev_name": "nvme3n1" 00:22:21.065 } 00:22:21.065 ]' 00:22:21.065 18:26:14 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:22:21.065 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:22:21.065 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:21.065 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:22:21.065 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:21.065 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:21.065 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:21.065 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:21.324 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:21.324 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:21.324 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:21.324 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:21.324 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:21.324 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:21.324 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:21.324 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:21.324 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:21.324 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:22:21.324 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:21.324 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:21.324 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions
00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:22:21.581 18:26:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:22:21.840 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:22:21.840 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:22:21.840 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:22:21.840 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:22:21.840 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:22:21.840 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:22:21.840 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:22:21.840 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:22:21.840 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:22:21.840 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:22:22.098 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:22:22.098 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:22:22.098 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:22:22.098 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:22:22.098 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:22:22.098 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:22:22.098 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:22:22.098 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:22:22.098 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:22:22.098 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:22:22.356 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:22:22.356 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:22:22.356 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:22:22.356 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:22:22.356 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:22:22.356 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:22:22.356 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:22:22.356 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
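With all six exports torn down, the suite double-checks its own teardown: the nbd_get_count helper traced next re-queries the target over the RPC socket and counts how many /dev/nbd devices it still reports. A minimal sketch of that counting idiom, assuming only the rpc.py path and socket used in this run:

  #!/usr/bin/env bash
  # Ask the SPDK app for its active NBD exports and count the survivors.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  nbd_count() {
      local disks_json
      disks_json=$("$rpc" -s "$sock" nbd_get_disks)   # '[]' once everything is stopped
      # grep -c exits non-zero on zero matches, so force success as the helper does.
      echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
  }

  [[ $(nbd_count) -eq 0 ]] || echo "some NBD devices are still attached" >&2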
00:22:22.356 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:22:22.356 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:22:22.356 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1')
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1')
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:22:22.615 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:22:22.616 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:22:22.616 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:22:22.616 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:22:22.616 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:22:22.616 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
00:22:22.874 /dev/nbd0
00:22:22.874 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:22:22.874 18:26:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
18:26:15 blockdev_xnvme.bdev_nbd --
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:22.874 18:26:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:22.874 18:26:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:22.874 18:26:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:22.874 18:26:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:22.874 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:22.874 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:22.874 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:22.874 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:22.874 1+0 records in 00:22:22.874 1+0 records out 00:22:22.874 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542232 s, 7.6 MB/s 00:22:22.874 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:22.874 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:22.874 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:22.874 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:22.874 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:22.874 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:22.874 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:22.874 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:22:23.133 /dev/nbd1 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:23.133 1+0 records in 00:22:23.133 1+0 records out 00:22:23.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676537 s, 6.1 MB/s 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:23.133 18:26:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:23.133 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:22:23.391 /dev/nbd10 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:23.391 1+0 records in 00:22:23.391 1+0 records out 00:22:23.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713567 s, 5.7 MB/s 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:23.391 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:22:23.392 /dev/nbd11 00:22:23.650 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:23.651 18:26:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:23.651 1+0 records in 00:22:23.651 1+0 records out 00:22:23.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050224 s, 8.2 MB/s 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:22:23.651 /dev/nbd12 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:23.651 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:22:23.910 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:23.910 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:23.910 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:23.910 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:23.910 1+0 records in 00:22:23.910 1+0 records out 00:22:23.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000908825 s, 4.5 MB/s 00:22:23.910 18:26:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:22:23.910 /dev/nbd13 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:23.910 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:24.169 1+0 records in 00:22:24.169 1+0 records out 00:22:24.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658708 s, 6.2 MB/s 00:22:24.169 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:24.169 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:24.169 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:24.169 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:24.169 18:26:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:24.169 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:24.169 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:24.169 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:24.169 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:24.169 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:24.169 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:24.169 { 00:22:24.169 "nbd_device": "/dev/nbd0", 00:22:24.169 "bdev_name": "nvme0n1" 00:22:24.169 }, 00:22:24.169 { 00:22:24.169 "nbd_device": "/dev/nbd1", 00:22:24.169 "bdev_name": "nvme1n1" 00:22:24.169 }, 00:22:24.169 { 00:22:24.169 "nbd_device": "/dev/nbd10", 00:22:24.169 "bdev_name": "nvme1n2" 00:22:24.169 }, 00:22:24.169 { 00:22:24.169 "nbd_device": "/dev/nbd11", 00:22:24.169 "bdev_name": "nvme1n3" 00:22:24.169 }, 00:22:24.169 { 00:22:24.169 "nbd_device": "/dev/nbd12", 00:22:24.169 "bdev_name": "nvme2n1" 00:22:24.169 }, 00:22:24.169 { 00:22:24.169 "nbd_device": "/dev/nbd13", 00:22:24.169 "bdev_name": "nvme3n1" 00:22:24.169 } 00:22:24.169 ]' 00:22:24.169 18:26:17 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:24.169 { 00:22:24.169 "nbd_device": "/dev/nbd0", 00:22:24.169 "bdev_name": "nvme0n1" 00:22:24.169 }, 00:22:24.169 { 00:22:24.169 "nbd_device": "/dev/nbd1", 00:22:24.169 "bdev_name": "nvme1n1" 00:22:24.169 }, 00:22:24.169 { 00:22:24.169 "nbd_device": "/dev/nbd10", 00:22:24.169 "bdev_name": "nvme1n2" 00:22:24.169 }, 00:22:24.169 { 00:22:24.169 "nbd_device": "/dev/nbd11", 00:22:24.169 "bdev_name": "nvme1n3" 00:22:24.169 }, 00:22:24.169 { 00:22:24.169 "nbd_device": "/dev/nbd12", 00:22:24.169 "bdev_name": "nvme2n1" 00:22:24.169 }, 00:22:24.169 { 00:22:24.169 "nbd_device": "/dev/nbd13", 00:22:24.169 "bdev_name": "nvme3n1" 00:22:24.169 } 00:22:24.169 ]' 00:22:24.169 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:24.427 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:22:24.428 /dev/nbd1 00:22:24.428 /dev/nbd10 00:22:24.428 /dev/nbd11 00:22:24.428 /dev/nbd12 00:22:24.428 /dev/nbd13' 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:22:24.428 /dev/nbd1 00:22:24.428 /dev/nbd10 00:22:24.428 /dev/nbd11 00:22:24.428 /dev/nbd12 00:22:24.428 /dev/nbd13' 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:24.428 256+0 records in 00:22:24.428 256+0 records out 00:22:24.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00572178 s, 183 MB/s 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:24.428 256+0 records in 00:22:24.428 256+0 records out 00:22:24.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.086552 s, 12.1 MB/s 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:22:24.428 256+0 records in 00:22:24.428 256+0 records out 00:22:24.428 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0889689 s, 11.8 MB/s 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:24.428 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:22:24.687 256+0 records in 00:22:24.687 256+0 records out 00:22:24.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0871346 s, 12.0 MB/s 00:22:24.687 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:24.687 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:22:24.687 256+0 records in 00:22:24.687 256+0 records out 00:22:24.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0868929 s, 12.1 MB/s 00:22:24.687 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:24.687 18:26:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:22:24.945 256+0 records in 00:22:24.945 256+0 records out 00:22:24.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.106536 s, 9.8 MB/s 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:22:24.945 256+0 records in 00:22:24.945 256+0 records out 00:22:24.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0884683 s, 11.9 MB/s 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:24.945 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:22:24.946 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:24.946 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:22:24.946 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:24.946 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:22:24.946 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:24.946 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:22:24.946 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:24.946 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:24.946 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:24.946 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:24.946 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:24.946 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:25.204 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:25.204 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:25.204 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:25.204 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:25.204 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:25.204 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:25.204 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:25.204 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:25.204 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:25.204 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:22:25.463 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:25.463 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:25.463 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:25.463 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:25.463 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:25.463 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:25.463 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:25.463 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:25.463 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:25.463 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:22:25.721 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:22:25.721 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:22:25.721 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:22:25.721 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:25.721 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:25.721 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:22:25.721 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:25.721 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:25.721 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:25.721 18:26:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:22:25.721 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:25.980 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:22:26.239 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:22:26.239 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:22:26.239 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:22:26.239 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:26.239 18:26:19 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:26.239 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:22:26.239 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:26.239 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:26.239 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:26.239 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:26.239 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:22:26.498 18:26:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:26.757 malloc_lvol_verify 00:22:26.757 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:27.016 c03c71bb-8520-4d97-8d6c-3ba29c98f792 00:22:27.016 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:27.274 8747fef4-d362-4de7-ad7c-16936f0f8ed8 00:22:27.275 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:27.533 /dev/nbd0 00:22:27.533 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:22:27.533 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:22:27.533 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:22:27.533 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:22:27.533 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
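What follows is the lvol round-trip traced above: the suite creates a malloc bdev, builds a logical volume store on it, exports a 4 MiB lvol over /dev/nbd0, waits until the kernel publishes a non-zero capacity in sysfs, and then runs the mkfs.ext4 whose output appears below. A condensed sketch of that flow, with the sizes, socket and paths taken from this run (the real helpers live in nbd_common.sh):

  #!/usr/bin/env bash
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  nbd=/dev/nbd0

  # 16 MiB malloc bdev with 512 B blocks, then an lvstore and a 4 MiB lvol.
  "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
  "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
  "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
  "$rpc" -s "$sock" nbd_start_disk lvs/lvol "$nbd"

  # /sys/block/nbd0/size stays 0 until the NBD handshake completes; mkfs on a
  # zero-sized device would fail, so poll briefly first (assumed retry cadence).
  for _ in $(seq 1 20); do
      [[ -e /sys/block/$(basename "$nbd")/size ]] &&
          (( $(cat /sys/block/$(basename "$nbd")/size) > 0 )) && break
      sleep 0.1
  done

  mkfs.ext4 "$nbd"
  "$rpc" -s "$sock" nbd_stop_disk "$nbd"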
00:22:27.533 mke2fs 1.47.0 (5-Feb-2023)
00:22:27.533 Discarding device blocks: 0/4096 done
00:22:27.533 Creating filesystem with 4096 1k blocks and 1024 inodes
00:22:27.533
00:22:27.533 Allocating group tables: 0/1 done
00:22:27.533 Writing inode tables: 0/1 done
00:22:27.533 Creating journal (1024 blocks): done
00:22:27.533 Writing superblocks and filesystem accounting information: 0/1 done
00:22:27.533
00:22:27.533 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:22:27.533 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:22:27.533 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:22:27.533 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:22:27.533 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:22:27.533 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:22:27.533 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:22:27.533 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:22:27.791 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:22:27.791 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:22:27.791 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:22:27.791 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:22:27.791 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:22:27.791 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:22:27.792 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:22:27.792 18:26:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74526
00:22:27.792 18:26:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74526 ']'
00:22:27.792 18:26:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74526
00:22:27.792 18:26:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:22:27.792 18:26:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:27.792 18:26:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74526
00:22:27.792 18:26:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:27.792 18:26:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:27.792 killing process with pid 74526
00:22:27.792 18:26:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74526'
00:22:27.792 18:26:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74526
00:22:27.792 18:26:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74526
00:22:28.727 18:26:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:22:28.727
00:22:28.727 real 0m10.530s
00:22:28.727 user 0m14.089s
00:22:28.727 sys 0m4.050s
00:22:28.727 18:26:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:28.727 18:26:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
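The killprocess helper traced above is the standard teardown for the SPDK target started at the beginning of the test: confirm the PID is alive, confirm it is not a sudo wrapper, then kill and reap it. A trimmed sketch of that logic (the real helper in autotest_common.sh also handles sudo-wrapped targets and forced kills, omitted here):

  killprocess() {
      local pid=$1 process_name=
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0              # already gone
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 for an SPDK target
      fi
      [[ $process_name == sudo ]] && return 1             # never kill the sudo parent itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                         # reap: the target is a child of the test shell
  }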
00:22:28.727 ************************************
00:22:28.727 END TEST bdev_nbd ************************************
00:22:28.987 18:26:22 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:22:28.987 18:26:22 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']'
00:22:28.987 18:26:22 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']'
00:22:28.987 18:26:22 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite ''
00:22:28.987 18:26:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:28.987 18:26:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:28.987 18:26:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:22:28.987 ************************************
00:22:28.987 START TEST bdev_fio ************************************
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite ''
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:22:28.987 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']'
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']'
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']'
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]'
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]'
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n2]'
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n2
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n3]'
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n3
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]'
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]'
00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1
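For reference, the bdev.fio assembled by the loop above ends up with one job section per bdev. Only the serialize_overlap=1 line and the [job_*] sections are visible in this trace; the rest of the global section is written earlier by fio_config_gen's cat and is assumed here rather than quoted. A minimal config of the same shape:

  [global]
  serialize_overlap=1
  ; remaining global/verify options come from fio_config_gen (not shown in this trace)

  [job_nvme0n1]
  filename=nvme0n1

  [job_nvme1n1]
  filename=nvme1n1

  [job_nvme1n2]
  filename=nvme1n2

  [job_nvme1n3]
  filename=nvme1n3

  [job_nvme2n1]
  filename=nvme2n1

  [job_nvme3n1]
  filename=nvme3n1

Note that the engine, queue depth, block size and runtime are not in the file at all; they ride in on the fio command line (--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10), which is why the same config can be reused for the verify and trim passes.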
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:28.987 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:22:28.988 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:28.988 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:28.988 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:28.988 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:22:28.988 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:28.988 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:28.988 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:28.988 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:22:28.988 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:28.988 18:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:29.246 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:29.246 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:29.246 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:29.246 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:29.246 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:29.246 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:29.246 fio-3.35 00:22:29.246 Starting 6 threads 00:22:41.473 00:22:41.473 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74928: Tue Nov 26 18:26:33 2024 00:22:41.473 read: IOPS=34.7k, BW=135MiB/s (142MB/s)(1354MiB/10001msec) 00:22:41.473 slat (usec): min=2, max=4283, avg= 9.10, stdev=10.40 00:22:41.473 clat (usec): min=84, max=4822, avg=440.24, 
stdev=228.21 00:22:41.473 lat (usec): min=89, max=4835, avg=449.34, stdev=230.21 00:22:41.473 clat percentiles (usec): 00:22:41.473 | 50.000th=[ 396], 99.000th=[ 1090], 99.900th=[ 1532], 99.990th=[ 3654], 00:22:41.473 | 99.999th=[ 4817] 00:22:41.473 write: IOPS=35.0k, BW=137MiB/s (143MB/s)(1369MiB/10001msec); 0 zone resets 00:22:41.473 slat (usec): min=8, max=4261, avg=35.64, stdev=47.26 00:22:41.473 clat (usec): min=73, max=6152, avg=611.27, stdev=293.76 00:22:41.473 lat (usec): min=102, max=6210, avg=646.90, stdev=303.50 00:22:41.473 clat percentiles (usec): 00:22:41.473 | 50.000th=[ 570], 99.000th=[ 1450], 99.900th=[ 1958], 99.990th=[ 3687], 00:22:41.473 | 99.999th=[ 6063] 00:22:41.473 bw ( KiB/s): min=108273, max=168760, per=99.69%, avg=139694.68, stdev=2846.00, samples=114 00:22:41.473 iops : min=27068, max=42190, avg=34923.37, stdev=711.50, samples=114 00:22:41.473 lat (usec) : 100=0.01%, 250=14.45%, 500=38.72%, 750=27.80%, 1000=13.17% 00:22:41.473 lat (msec) : 2=5.78%, 4=0.06%, 10=0.01% 00:22:41.473 cpu : usr=47.81%, sys=33.45%, ctx=9002, majf=0, minf=28608 00:22:41.473 IO depths : 1=11.7%, 2=24.0%, 4=50.9%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:41.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.473 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.473 issued rwts: total=346565,350364,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.473 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:41.473 00:22:41.473 Run status group 0 (all jobs): 00:22:41.473 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=1354MiB (1420MB), run=10001-10001msec 00:22:41.473 WRITE: bw=137MiB/s (143MB/s), 137MiB/s-137MiB/s (143MB/s-143MB/s), io=1369MiB (1435MB), run=10001-10001msec 00:22:41.473 ----------------------------------------------------- 00:22:41.473 Suppressions used: 00:22:41.473 count bytes template 00:22:41.473 6 48 /usr/src/fio/parse.c 00:22:41.473 3536 339456 /usr/src/fio/iolog.c 00:22:41.473 1 8 libtcmalloc_minimal.so 00:22:41.473 1 904 libcrypto.so 00:22:41.473 ----------------------------------------------------- 00:22:41.473 00:22:41.473 00:22:41.473 real 0m12.462s 00:22:41.473 user 0m30.703s 00:22:41.473 sys 0m20.444s 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:22:41.473 ************************************ 00:22:41.473 END TEST bdev_fio_rw_verify 00:22:41.473 ************************************ 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:41.473 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:22:41.474 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:22:41.474 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:22:41.474 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:22:41.474 18:26:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:41.474 18:26:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "bf4b1a0b-6652-479c-a7a1-605a1f7b9adf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "bf4b1a0b-6652-479c-a7a1-605a1f7b9adf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1deae91d-0142-401b-94d5-27fcb5f41646"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1deae91d-0142-401b-94d5-27fcb5f41646",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "77e823ac-fcb2-42a5-b8d9-58d68256fc0a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "77e823ac-fcb2-42a5-b8d9-58d68256fc0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "48fd55ea-ed19-42fc-845f-b76e6e7740f3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "48fd55ea-ed19-42fc-845f-b76e6e7740f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ac367b75-51c9-4097-a803-e4ed59b7ba65"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ac367b75-51c9-4097-a803-e4ed59b7ba65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "73d2dc39-62f0-4b0f-a113-7d25d89fe876"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "73d2dc39-62f0-4b0f-a113-7d25d89fe876",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:22:41.730 18:26:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:22:41.730 18:26:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:41.730 /home/vagrant/spdk_repo/spdk 00:22:41.730 18:26:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:22:41.730 18:26:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:22:41.730 18:26:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:22:41.730 00:22:41.730 real 0m12.686s 00:22:41.730 user 0m30.828s 00:22:41.730 sys 0m20.551s 00:22:41.730 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:41.730 18:26:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:41.730 ************************************ 00:22:41.730 END TEST bdev_fio 00:22:41.730 ************************************ 00:22:41.730 18:26:34 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:41.731 18:26:34 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:41.731 18:26:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:41.731 18:26:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:41.731 18:26:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:41.731 ************************************ 00:22:41.731 START TEST bdev_verify 00:22:41.731 ************************************ 00:22:41.731 18:26:34 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:41.731 [2024-11-26 18:26:34.965479] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:22:41.731 [2024-11-26 18:26:34.965608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75109 ] 00:22:41.988 [2024-11-26 18:26:35.137699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:41.988 [2024-11-26 18:26:35.248756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.988 [2024-11-26 18:26:35.248792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.551 Running I/O for 5 seconds... 
00:22:44.855 25376.00 IOPS, 99.12 MiB/s [2024-11-26T18:26:39.125Z] 25040.00 IOPS, 97.81 MiB/s [2024-11-26T18:26:40.058Z] 24853.33 IOPS, 97.08 MiB/s [2024-11-26T18:26:40.994Z] 24952.00 IOPS, 97.47 MiB/s [2024-11-26T18:26:40.994Z] 24691.20 IOPS, 96.45 MiB/s 00:22:47.659 Latency(us) 00:22:47.659 [2024-11-26T18:26:40.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.659 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:47.659 Verification LBA range: start 0x0 length 0x20000 00:22:47.659 nvme0n1 : 5.06 1949.42 7.61 0.00 0.00 65551.49 9386.82 63647.19 00:22:47.659 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:47.659 Verification LBA range: start 0x20000 length 0x20000 00:22:47.659 nvme0n1 : 5.03 1983.90 7.75 0.00 0.00 64408.80 7097.35 60441.94 00:22:47.659 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:47.660 Verification LBA range: start 0x0 length 0x80000 00:22:47.660 nvme1n1 : 5.03 1933.07 7.55 0.00 0.00 66001.07 8642.74 56778.79 00:22:47.660 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:47.660 Verification LBA range: start 0x80000 length 0x80000 00:22:47.660 nvme1n1 : 5.07 1969.84 7.69 0.00 0.00 64768.13 8070.37 63189.30 00:22:47.660 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:47.660 Verification LBA range: start 0x0 length 0x80000 00:22:47.660 nvme1n2 : 5.05 1927.93 7.53 0.00 0.00 66077.16 6811.17 66394.55 00:22:47.660 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:47.660 Verification LBA range: start 0x80000 length 0x80000 00:22:47.660 nvme1n2 : 5.07 1968.73 7.69 0.00 0.00 64702.29 6524.98 64105.08 00:22:47.660 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:47.660 Verification LBA range: start 0x0 length 0x80000 00:22:47.660 nvme1n3 : 5.06 1923.08 7.51 0.00 0.00 66147.24 8928.92 66852.44 00:22:47.660 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:47.660 Verification LBA range: start 0x80000 length 0x80000 00:22:47.660 nvme1n3 : 5.07 1967.61 7.69 0.00 0.00 64640.50 3205.25 65020.87 00:22:47.660 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:47.660 Verification LBA range: start 0x0 length 0xbd0bd 00:22:47.660 nvme2n1 : 5.07 2760.23 10.78 0.00 0.00 45997.65 5780.90 54489.32 00:22:47.660 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:47.660 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:22:47.660 nvme2n1 : 5.06 2636.04 10.30 0.00 0.00 48129.16 5151.30 57236.68 00:22:47.660 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:47.660 Verification LBA range: start 0x0 length 0xa0000 00:22:47.660 nvme3n1 : 5.05 1876.19 7.33 0.00 0.00 67621.40 4550.32 66852.44 00:22:47.660 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:47.660 Verification LBA range: start 0xa0000 length 0xa0000 00:22:47.660 nvme3n1 : 5.06 1617.64 6.32 0.00 0.00 78241.81 5609.19 79215.57 00:22:47.660 [2024-11-26T18:26:40.995Z] =================================================================================================================== 00:22:47.660 [2024-11-26T18:26:40.995Z] Total : 24513.67 95.76 0.00 0.00 62293.55 3205.25 79215.57 00:22:48.622 00:22:48.622 real 0m7.052s 00:22:48.622 user 0m11.115s 00:22:48.622 sys 0m1.838s 00:22:48.622 18:26:41 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.622 18:26:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:22:48.622 ************************************ 00:22:48.622 END TEST bdev_verify 00:22:48.622 ************************************ 00:22:48.880 18:26:41 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:48.880 18:26:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:48.880 18:26:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.880 18:26:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:48.881 ************************************ 00:22:48.881 START TEST bdev_verify_big_io 00:22:48.881 ************************************ 00:22:48.881 18:26:42 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:48.881 [2024-11-26 18:26:42.089533] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:22:48.881 [2024-11-26 18:26:42.090039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75202 ] 00:22:49.140 [2024-11-26 18:26:42.262506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:49.140 [2024-11-26 18:26:42.374326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.140 [2024-11-26 18:26:42.374360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.708 Running I/O for 5 seconds... 
00:22:55.313 3220.00 IOPS, 201.25 MiB/s [2024-11-26T18:26:48.648Z] 4258.00 IOPS, 266.12 MiB/s 00:22:55.313 Latency(us) 00:22:55.313 [2024-11-26T18:26:48.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.313 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:55.313 Verification LBA range: start 0x0 length 0x2000 00:22:55.313 nvme0n1 : 5.69 177.02 11.06 0.00 0.00 700786.73 54947.21 1018355.03 00:22:55.313 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:55.313 Verification LBA range: start 0x2000 length 0x2000 00:22:55.313 nvme0n1 : 5.67 149.45 9.34 0.00 0.00 843342.27 42126.20 952418.38 00:22:55.313 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:55.313 Verification LBA range: start 0x0 length 0x8000 00:22:55.313 nvme1n1 : 5.63 156.39 9.77 0.00 0.00 769498.79 93410.26 1362690.91 00:22:55.313 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:55.313 Verification LBA range: start 0x8000 length 0x8000 00:22:55.313 nvme1n1 : 5.66 128.52 8.03 0.00 0.00 950862.61 113557.58 1091617.98 00:22:55.313 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:55.313 Verification LBA range: start 0x0 length 0x8000 00:22:55.313 nvme1n2 : 5.65 152.82 9.55 0.00 0.00 776093.09 47391.97 1296754.25 00:22:55.313 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:55.313 Verification LBA range: start 0x8000 length 0x8000 00:22:55.313 nvme1n2 : 5.67 110.13 6.88 0.00 0.00 1081506.17 84710.29 2637466.27 00:22:55.313 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:55.313 Verification LBA range: start 0x0 length 0x8000 00:22:55.313 nvme1n3 : 5.66 169.62 10.60 0.00 0.00 691956.90 26557.82 1113596.87 00:22:55.313 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:55.313 Verification LBA range: start 0x8000 length 0x8000 00:22:55.313 nvme1n3 : 5.67 129.86 8.12 0.00 0.00 895241.42 108062.85 1384669.79 00:22:55.313 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:55.313 Verification LBA range: start 0x0 length 0xbd0b 00:22:55.313 nvme2n1 : 5.69 258.57 16.16 0.00 0.00 442871.74 12191.41 556798.43 00:22:55.313 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:55.313 Verification LBA range: start 0xbd0b length 0xbd0b 00:22:55.313 nvme2n1 : 5.68 160.68 10.04 0.00 0.00 713102.85 6782.55 904797.46 00:22:55.313 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:55.313 Verification LBA range: start 0x0 length 0xa000 00:22:55.313 nvme3n1 : 5.70 233.63 14.60 0.00 0.00 477168.71 2132.07 901134.31 00:22:55.313 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:55.313 Verification LBA range: start 0xa000 length 0xa000 00:22:55.313 nvme3n1 : 5.68 153.45 9.59 0.00 0.00 732666.97 3319.73 1296754.25 00:22:55.313 [2024-11-26T18:26:48.648Z] =================================================================================================================== 00:22:55.314 [2024-11-26T18:26:48.649Z] Total : 1980.13 123.76 0.00 0.00 715223.04 2132.07 2637466.27 00:22:56.688 00:22:56.688 real 0m8.015s 00:22:56.688 user 0m14.563s 00:22:56.688 sys 0m0.533s 00:22:56.688 18:26:50 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:56.688 18:26:50 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # 
set +x 00:22:56.688 ************************************ 00:22:56.688 END TEST bdev_verify_big_io 00:22:56.688 ************************************ 00:22:56.947 18:26:50 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:56.947 18:26:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:56.947 18:26:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:56.947 18:26:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:56.947 ************************************ 00:22:56.947 START TEST bdev_write_zeroes 00:22:56.947 ************************************ 00:22:56.947 18:26:50 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:56.947 [2024-11-26 18:26:50.192585] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:22:56.947 [2024-11-26 18:26:50.192716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75316 ] 00:22:57.205 [2024-11-26 18:26:50.370893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.205 [2024-11-26 18:26:50.484565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.771 Running I/O for 1 seconds... 00:22:58.708 55840.00 IOPS, 218.12 MiB/s 00:22:58.708 Latency(us) 00:22:58.708 [2024-11-26T18:26:52.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.708 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:58.708 nvme0n1 : 1.04 8712.02 34.03 0.00 0.00 14679.47 8699.98 32968.33 00:22:58.708 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:58.708 nvme1n1 : 1.04 8700.64 33.99 0.00 0.00 14689.29 8699.98 37089.37 00:22:58.708 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:58.708 nvme1n2 : 1.05 8689.04 33.94 0.00 0.00 14700.19 8528.27 40523.57 00:22:58.708 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:58.708 nvme1n3 : 1.05 8678.34 33.90 0.00 0.00 14709.58 8413.79 43270.93 00:22:58.708 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:58.708 nvme2n1 : 1.05 11235.05 43.89 0.00 0.00 11297.50 3691.77 39836.73 00:22:58.708 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:58.708 nvme3n1 : 1.04 8724.15 34.08 0.00 0.00 14533.80 4807.88 39836.73 00:22:58.708 [2024-11-26T18:26:52.043Z] =================================================================================================================== 00:22:58.708 [2024-11-26T18:26:52.043Z] Total : 54739.25 213.83 0.00 0.00 13969.90 3691.77 43270.93 00:23:00.086 00:23:00.086 real 0m3.023s 00:23:00.086 user 0m2.313s 00:23:00.086 sys 0m0.534s 00:23:00.086 18:26:53 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.086 18:26:53 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:23:00.086 ************************************ 00:23:00.086 END TEST 
bdev_write_zeroes 00:23:00.086 ************************************ 00:23:00.086 18:26:53 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:00.086 18:26:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:00.086 18:26:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.086 18:26:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:00.086 ************************************ 00:23:00.086 START TEST bdev_json_nonenclosed 00:23:00.086 ************************************ 00:23:00.086 18:26:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:00.086 [2024-11-26 18:26:53.285375] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:23:00.086 [2024-11-26 18:26:53.285503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75379 ] 00:23:00.345 [2024-11-26 18:26:53.465457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.345 [2024-11-26 18:26:53.583789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.345 [2024-11-26 18:26:53.583886] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:23:00.345 [2024-11-26 18:26:53.583905] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:00.345 [2024-11-26 18:26:53.583915] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:00.605 00:23:00.605 real 0m0.651s 00:23:00.605 user 0m0.405s 00:23:00.605 sys 0m0.138s 00:23:00.605 18:26:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.605 18:26:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:00.605 ************************************ 00:23:00.605 END TEST bdev_json_nonenclosed 00:23:00.605 ************************************ 00:23:00.605 18:26:53 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:00.605 18:26:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:00.605 18:26:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.605 18:26:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:00.605 ************************************ 00:23:00.605 START TEST bdev_json_nonarray 00:23:00.605 ************************************ 00:23:00.605 18:26:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:00.864 [2024-11-26 18:26:53.996505] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:23:00.864 [2024-11-26 18:26:53.996613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75409 ] 00:23:00.864 [2024-11-26 18:26:54.172907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.123 [2024-11-26 18:26:54.283653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.123 [2024-11-26 18:26:54.283756] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:23:01.123 [2024-11-26 18:26:54.283774] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:01.123 [2024-11-26 18:26:54.283784] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:01.383 00:23:01.383 real 0m0.630s 00:23:01.383 user 0m0.399s 00:23:01.383 sys 0m0.126s 00:23:01.383 18:26:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:01.383 18:26:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:01.383 ************************************ 00:23:01.383 END TEST bdev_json_nonarray 00:23:01.383 ************************************ 00:23:01.383 18:26:54 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:23:01.383 18:26:54 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:23:01.383 18:26:54 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:23:01.383 18:26:54 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:23:01.383 18:26:54 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:23:01.383 18:26:54 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:01.383 18:26:54 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:01.383 18:26:54 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:23:01.383 18:26:54 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:23:01.383 18:26:54 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:23:01.383 18:26:54 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:23:01.383 18:26:54 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:02.321 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:20.422 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:20.422 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:20.422 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:35.297 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:35.297 ************************************ 00:23:35.297 END TEST blockdev_xnvme 00:23:35.297 ************************************ 00:23:35.297 00:23:35.297 real 1m25.220s 00:23:35.297 user 1m30.861s 00:23:35.297 sys 2m11.262s 00:23:35.297 18:27:26 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:35.297 18:27:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:35.297 18:27:26 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:23:35.297 18:27:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:35.297 18:27:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:35.297 18:27:26 -- 
common/autotest_common.sh@10 -- # set +x 00:23:35.297 ************************************ 00:23:35.297 START TEST ublk 00:23:35.297 ************************************ 00:23:35.297 18:27:26 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:23:35.297 * Looking for test storage... 00:23:35.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:23:35.297 18:27:26 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:35.297 18:27:26 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:23:35.297 18:27:26 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:35.297 18:27:26 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:35.297 18:27:26 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:35.297 18:27:26 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:35.297 18:27:26 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:35.297 18:27:26 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:23:35.297 18:27:26 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:23:35.297 18:27:26 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:23:35.297 18:27:26 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:23:35.297 18:27:26 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:23:35.297 18:27:26 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:23:35.297 18:27:26 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:23:35.297 18:27:26 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:35.297 18:27:26 ublk -- scripts/common.sh@344 -- # case "$op" in 00:23:35.297 18:27:26 ublk -- scripts/common.sh@345 -- # : 1 00:23:35.297 18:27:26 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:35.297 18:27:26 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:35.297 18:27:26 ublk -- scripts/common.sh@365 -- # decimal 1 00:23:35.297 18:27:26 ublk -- scripts/common.sh@353 -- # local d=1 00:23:35.297 18:27:26 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:35.297 18:27:26 ublk -- scripts/common.sh@355 -- # echo 1 00:23:35.297 18:27:26 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:23:35.297 18:27:26 ublk -- scripts/common.sh@366 -- # decimal 2 00:23:35.297 18:27:26 ublk -- scripts/common.sh@353 -- # local d=2 00:23:35.297 18:27:26 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:35.297 18:27:26 ublk -- scripts/common.sh@355 -- # echo 2 00:23:35.297 18:27:26 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:23:35.297 18:27:26 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:35.297 18:27:26 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:35.297 18:27:26 ublk -- scripts/common.sh@368 -- # return 0 00:23:35.297 18:27:26 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:35.297 18:27:26 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:35.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.297 --rc genhtml_branch_coverage=1 00:23:35.297 --rc genhtml_function_coverage=1 00:23:35.297 --rc genhtml_legend=1 00:23:35.297 --rc geninfo_all_blocks=1 00:23:35.297 --rc geninfo_unexecuted_blocks=1 00:23:35.297 00:23:35.297 ' 00:23:35.297 18:27:26 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:35.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.297 --rc genhtml_branch_coverage=1 00:23:35.297 --rc genhtml_function_coverage=1 00:23:35.297 --rc genhtml_legend=1 00:23:35.297 --rc geninfo_all_blocks=1 00:23:35.297 --rc geninfo_unexecuted_blocks=1 00:23:35.297 00:23:35.297 ' 00:23:35.297 18:27:26 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:35.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.297 --rc genhtml_branch_coverage=1 00:23:35.297 --rc genhtml_function_coverage=1 00:23:35.297 --rc genhtml_legend=1 00:23:35.297 --rc geninfo_all_blocks=1 00:23:35.297 --rc geninfo_unexecuted_blocks=1 00:23:35.297 00:23:35.297 ' 00:23:35.297 18:27:26 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:35.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.298 --rc genhtml_branch_coverage=1 00:23:35.298 --rc genhtml_function_coverage=1 00:23:35.298 --rc genhtml_legend=1 00:23:35.298 --rc geninfo_all_blocks=1 00:23:35.298 --rc geninfo_unexecuted_blocks=1 00:23:35.298 00:23:35.298 ' 00:23:35.298 18:27:26 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:23:35.298 18:27:26 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:23:35.298 18:27:26 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:23:35.298 18:27:26 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:23:35.298 18:27:26 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:23:35.298 18:27:26 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:23:35.298 18:27:26 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:23:35.298 18:27:26 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:23:35.298 18:27:26 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:23:35.298 18:27:26 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:23:35.298 18:27:26 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:23:35.298 18:27:26 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:23:35.298 18:27:26 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:23:35.298 18:27:26 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:23:35.298 18:27:26 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:23:35.298 18:27:26 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:23:35.298 18:27:26 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:23:35.298 18:27:26 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:23:35.298 18:27:26 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:23:35.298 18:27:26 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:23:35.298 18:27:26 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:35.298 18:27:26 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:35.298 18:27:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:35.298 ************************************ 00:23:35.298 START TEST test_save_ublk_config 00:23:35.298 ************************************ 00:23:35.298 18:27:26 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:23:35.298 18:27:26 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:23:35.298 18:27:26 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75902 00:23:35.298 18:27:26 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:23:35.298 18:27:26 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:23:35.298 18:27:26 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75902 00:23:35.298 18:27:26 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75902 ']' 00:23:35.298 18:27:26 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.298 18:27:26 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.298 18:27:26 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.298 18:27:26 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.298 18:27:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:35.298 [2024-11-26 18:27:27.019999] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:23:35.298 [2024-11-26 18:27:27.020140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75902 ] 00:23:35.298 [2024-11-26 18:27:27.196819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.298 [2024-11-26 18:27:27.310191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.298 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.298 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:23:35.298 18:27:28 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:23:35.298 18:27:28 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:23:35.298 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.298 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:35.298 [2024-11-26 18:27:28.208647] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:35.298 [2024-11-26 18:27:28.209899] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:35.298 malloc0 00:23:35.298 [2024-11-26 18:27:28.288798] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:23:35.298 [2024-11-26 18:27:28.288909] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:23:35.298 [2024-11-26 18:27:28.288924] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:35.298 [2024-11-26 18:27:28.288932] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:35.298 [2024-11-26 18:27:28.297745] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:35.298 [2024-11-26 18:27:28.297770] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:35.298 [2024-11-26 18:27:28.304665] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:35.298 [2024-11-26 18:27:28.304766] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:35.298 [2024-11-26 18:27:28.321653] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:35.298 0 00:23:35.298 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.298 18:27:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:23:35.298 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.298 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:35.298 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.298 18:27:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:23:35.298 "subsystems": [ 00:23:35.298 { 00:23:35.298 "subsystem": "fsdev", 00:23:35.298 "config": [ 00:23:35.298 { 00:23:35.298 "method": "fsdev_set_opts", 00:23:35.298 "params": { 00:23:35.298 "fsdev_io_pool_size": 65535, 00:23:35.298 "fsdev_io_cache_size": 256 00:23:35.298 } 00:23:35.298 } 00:23:35.298 ] 00:23:35.298 }, 00:23:35.298 { 00:23:35.298 "subsystem": "keyring", 00:23:35.298 "config": [] 00:23:35.298 }, 00:23:35.298 { 00:23:35.298 "subsystem": "iobuf", 00:23:35.298 "config": [ 00:23:35.298 { 
00:23:35.298 "method": "iobuf_set_options", 00:23:35.298 "params": { 00:23:35.298 "small_pool_count": 8192, 00:23:35.298 "large_pool_count": 1024, 00:23:35.298 "small_bufsize": 8192, 00:23:35.298 "large_bufsize": 135168, 00:23:35.298 "enable_numa": false 00:23:35.298 } 00:23:35.298 } 00:23:35.298 ] 00:23:35.298 }, 00:23:35.298 { 00:23:35.298 "subsystem": "sock", 00:23:35.298 "config": [ 00:23:35.298 { 00:23:35.298 "method": "sock_set_default_impl", 00:23:35.298 "params": { 00:23:35.298 "impl_name": "posix" 00:23:35.298 } 00:23:35.298 }, 00:23:35.298 { 00:23:35.298 "method": "sock_impl_set_options", 00:23:35.298 "params": { 00:23:35.298 "impl_name": "ssl", 00:23:35.298 "recv_buf_size": 4096, 00:23:35.298 "send_buf_size": 4096, 00:23:35.298 "enable_recv_pipe": true, 00:23:35.298 "enable_quickack": false, 00:23:35.298 "enable_placement_id": 0, 00:23:35.298 "enable_zerocopy_send_server": true, 00:23:35.298 "enable_zerocopy_send_client": false, 00:23:35.298 "zerocopy_threshold": 0, 00:23:35.298 "tls_version": 0, 00:23:35.298 "enable_ktls": false 00:23:35.298 } 00:23:35.298 }, 00:23:35.298 { 00:23:35.298 "method": "sock_impl_set_options", 00:23:35.298 "params": { 00:23:35.298 "impl_name": "posix", 00:23:35.298 "recv_buf_size": 2097152, 00:23:35.298 "send_buf_size": 2097152, 00:23:35.298 "enable_recv_pipe": true, 00:23:35.298 "enable_quickack": false, 00:23:35.298 "enable_placement_id": 0, 00:23:35.298 "enable_zerocopy_send_server": true, 00:23:35.298 "enable_zerocopy_send_client": false, 00:23:35.298 "zerocopy_threshold": 0, 00:23:35.298 "tls_version": 0, 00:23:35.298 "enable_ktls": false 00:23:35.298 } 00:23:35.298 } 00:23:35.298 ] 00:23:35.298 }, 00:23:35.298 { 00:23:35.298 "subsystem": "vmd", 00:23:35.298 "config": [] 00:23:35.298 }, 00:23:35.298 { 00:23:35.298 "subsystem": "accel", 00:23:35.298 "config": [ 00:23:35.298 { 00:23:35.298 "method": "accel_set_options", 00:23:35.298 "params": { 00:23:35.298 "small_cache_size": 128, 00:23:35.298 "large_cache_size": 16, 00:23:35.298 "task_count": 2048, 00:23:35.298 "sequence_count": 2048, 00:23:35.298 "buf_count": 2048 00:23:35.298 } 00:23:35.298 } 00:23:35.298 ] 00:23:35.298 }, 00:23:35.298 { 00:23:35.298 "subsystem": "bdev", 00:23:35.298 "config": [ 00:23:35.298 { 00:23:35.298 "method": "bdev_set_options", 00:23:35.298 "params": { 00:23:35.298 "bdev_io_pool_size": 65535, 00:23:35.298 "bdev_io_cache_size": 256, 00:23:35.298 "bdev_auto_examine": true, 00:23:35.298 "iobuf_small_cache_size": 128, 00:23:35.298 "iobuf_large_cache_size": 16 00:23:35.298 } 00:23:35.298 }, 00:23:35.298 { 00:23:35.298 "method": "bdev_raid_set_options", 00:23:35.298 "params": { 00:23:35.298 "process_window_size_kb": 1024, 00:23:35.298 "process_max_bandwidth_mb_sec": 0 00:23:35.298 } 00:23:35.298 }, 00:23:35.298 { 00:23:35.298 "method": "bdev_iscsi_set_options", 00:23:35.298 "params": { 00:23:35.298 "timeout_sec": 30 00:23:35.298 } 00:23:35.298 }, 00:23:35.298 { 00:23:35.298 "method": "bdev_nvme_set_options", 00:23:35.298 "params": { 00:23:35.298 "action_on_timeout": "none", 00:23:35.298 "timeout_us": 0, 00:23:35.298 "timeout_admin_us": 0, 00:23:35.298 "keep_alive_timeout_ms": 10000, 00:23:35.298 "arbitration_burst": 0, 00:23:35.298 "low_priority_weight": 0, 00:23:35.298 "medium_priority_weight": 0, 00:23:35.298 "high_priority_weight": 0, 00:23:35.298 "nvme_adminq_poll_period_us": 10000, 00:23:35.299 "nvme_ioq_poll_period_us": 0, 00:23:35.299 "io_queue_requests": 0, 00:23:35.299 "delay_cmd_submit": true, 00:23:35.299 "transport_retry_count": 4, 00:23:35.299 
"bdev_retry_count": 3, 00:23:35.299 "transport_ack_timeout": 0, 00:23:35.299 "ctrlr_loss_timeout_sec": 0, 00:23:35.299 "reconnect_delay_sec": 0, 00:23:35.299 "fast_io_fail_timeout_sec": 0, 00:23:35.299 "disable_auto_failback": false, 00:23:35.299 "generate_uuids": false, 00:23:35.299 "transport_tos": 0, 00:23:35.299 "nvme_error_stat": false, 00:23:35.299 "rdma_srq_size": 0, 00:23:35.299 "io_path_stat": false, 00:23:35.299 "allow_accel_sequence": false, 00:23:35.299 "rdma_max_cq_size": 0, 00:23:35.299 "rdma_cm_event_timeout_ms": 0, 00:23:35.299 "dhchap_digests": [ 00:23:35.299 "sha256", 00:23:35.299 "sha384", 00:23:35.299 "sha512" 00:23:35.299 ], 00:23:35.299 "dhchap_dhgroups": [ 00:23:35.299 "null", 00:23:35.299 "ffdhe2048", 00:23:35.299 "ffdhe3072", 00:23:35.299 "ffdhe4096", 00:23:35.299 "ffdhe6144", 00:23:35.299 "ffdhe8192" 00:23:35.299 ] 00:23:35.299 } 00:23:35.299 }, 00:23:35.299 { 00:23:35.299 "method": "bdev_nvme_set_hotplug", 00:23:35.299 "params": { 00:23:35.299 "period_us": 100000, 00:23:35.299 "enable": false 00:23:35.299 } 00:23:35.299 }, 00:23:35.299 { 00:23:35.299 "method": "bdev_malloc_create", 00:23:35.299 "params": { 00:23:35.299 "name": "malloc0", 00:23:35.299 "num_blocks": 8192, 00:23:35.299 "block_size": 4096, 00:23:35.299 "physical_block_size": 4096, 00:23:35.299 "uuid": "44fe87e5-357f-408a-ae3e-3656fb322c87", 00:23:35.299 "optimal_io_boundary": 0, 00:23:35.299 "md_size": 0, 00:23:35.299 "dif_type": 0, 00:23:35.299 "dif_is_head_of_md": false, 00:23:35.299 "dif_pi_format": 0 00:23:35.299 } 00:23:35.299 }, 00:23:35.299 { 00:23:35.299 "method": "bdev_wait_for_examine" 00:23:35.299 } 00:23:35.299 ] 00:23:35.299 }, 00:23:35.299 { 00:23:35.299 "subsystem": "scsi", 00:23:35.299 "config": null 00:23:35.299 }, 00:23:35.299 { 00:23:35.299 "subsystem": "scheduler", 00:23:35.299 "config": [ 00:23:35.299 { 00:23:35.299 "method": "framework_set_scheduler", 00:23:35.299 "params": { 00:23:35.299 "name": "static" 00:23:35.299 } 00:23:35.299 } 00:23:35.299 ] 00:23:35.299 }, 00:23:35.299 { 00:23:35.299 "subsystem": "vhost_scsi", 00:23:35.299 "config": [] 00:23:35.299 }, 00:23:35.299 { 00:23:35.299 "subsystem": "vhost_blk", 00:23:35.299 "config": [] 00:23:35.299 }, 00:23:35.299 { 00:23:35.299 "subsystem": "ublk", 00:23:35.299 "config": [ 00:23:35.299 { 00:23:35.299 "method": "ublk_create_target", 00:23:35.299 "params": { 00:23:35.299 "cpumask": "1" 00:23:35.299 } 00:23:35.299 }, 00:23:35.299 { 00:23:35.299 "method": "ublk_start_disk", 00:23:35.299 "params": { 00:23:35.299 "bdev_name": "malloc0", 00:23:35.299 "ublk_id": 0, 00:23:35.299 "num_queues": 1, 00:23:35.299 "queue_depth": 128 00:23:35.299 } 00:23:35.299 } 00:23:35.299 ] 00:23:35.299 }, 00:23:35.299 { 00:23:35.299 "subsystem": "nbd", 00:23:35.299 "config": [] 00:23:35.299 }, 00:23:35.299 { 00:23:35.299 "subsystem": "nvmf", 00:23:35.299 "config": [ 00:23:35.299 { 00:23:35.299 "method": "nvmf_set_config", 00:23:35.299 "params": { 00:23:35.299 "discovery_filter": "match_any", 00:23:35.299 "admin_cmd_passthru": { 00:23:35.299 "identify_ctrlr": false 00:23:35.299 }, 00:23:35.299 "dhchap_digests": [ 00:23:35.299 "sha256", 00:23:35.299 "sha384", 00:23:35.299 "sha512" 00:23:35.299 ], 00:23:35.299 "dhchap_dhgroups": [ 00:23:35.299 "null", 00:23:35.299 "ffdhe2048", 00:23:35.299 "ffdhe3072", 00:23:35.299 "ffdhe4096", 00:23:35.299 "ffdhe6144", 00:23:35.299 "ffdhe8192" 00:23:35.299 ] 00:23:35.299 } 00:23:35.299 }, 00:23:35.299 { 00:23:35.299 "method": "nvmf_set_max_subsystems", 00:23:35.299 "params": { 00:23:35.299 "max_subsystems": 1024 
00:23:35.299 } 00:23:35.299 }, 00:23:35.299 { 00:23:35.299 "method": "nvmf_set_crdt", 00:23:35.299 "params": { 00:23:35.299 "crdt1": 0, 00:23:35.299 "crdt2": 0, 00:23:35.299 "crdt3": 0 00:23:35.299 } 00:23:35.299 } 00:23:35.299 ] 00:23:35.299 }, 00:23:35.299 { 00:23:35.299 "subsystem": "iscsi", 00:23:35.299 "config": [ 00:23:35.299 { 00:23:35.299 "method": "iscsi_set_options", 00:23:35.299 "params": { 00:23:35.299 "node_base": "iqn.2016-06.io.spdk", 00:23:35.299 "max_sessions": 128, 00:23:35.299 "max_connections_per_session": 2, 00:23:35.299 "max_queue_depth": 64, 00:23:35.299 "default_time2wait": 2, 00:23:35.299 "default_time2retain": 20, 00:23:35.299 "first_burst_length": 8192, 00:23:35.299 "immediate_data": true, 00:23:35.299 "allow_duplicated_isid": false, 00:23:35.299 "error_recovery_level": 0, 00:23:35.299 "nop_timeout": 60, 00:23:35.299 "nop_in_interval": 30, 00:23:35.299 "disable_chap": false, 00:23:35.299 "require_chap": false, 00:23:35.299 "mutual_chap": false, 00:23:35.299 "chap_group": 0, 00:23:35.299 "max_large_datain_per_connection": 64, 00:23:35.299 "max_r2t_per_connection": 4, 00:23:35.299 "pdu_pool_size": 36864, 00:23:35.299 "immediate_data_pool_size": 16384, 00:23:35.299 "data_out_pool_size": 2048 00:23:35.299 } 00:23:35.299 } 00:23:35.299 ] 00:23:35.299 } 00:23:35.299 ] 00:23:35.299 }' 00:23:35.299 18:27:28 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75902 00:23:35.299 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75902 ']' 00:23:35.299 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75902 00:23:35.299 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:23:35.299 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.299 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75902 00:23:35.559 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:35.559 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:35.559 killing process with pid 75902 00:23:35.559 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75902' 00:23:35.559 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75902 00:23:35.559 18:27:28 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75902 00:23:36.936 [2024-11-26 18:27:30.070793] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:23:36.936 [2024-11-26 18:27:30.108668] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:36.937 [2024-11-26 18:27:30.108815] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:36.937 [2024-11-26 18:27:30.116641] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:36.937 [2024-11-26 18:27:30.116695] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:36.937 [2024-11-26 18:27:30.116707] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:36.937 [2024-11-26 18:27:30.116745] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:36.937 [2024-11-26 18:27:30.116886] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:39.479 18:27:32 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75970 00:23:39.479 18:27:32 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 75970 00:23:39.479 18:27:32 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75970 ']' 00:23:39.479 18:27:32 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.479 18:27:32 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.479 18:27:32 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.479 18:27:32 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:23:39.479 18:27:32 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.479 18:27:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:39.479 18:27:32 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:23:39.479 "subsystems": [ 00:23:39.479 { 00:23:39.479 "subsystem": "fsdev", 00:23:39.479 "config": [ 00:23:39.479 { 00:23:39.479 "method": "fsdev_set_opts", 00:23:39.479 "params": { 00:23:39.479 "fsdev_io_pool_size": 65535, 00:23:39.479 "fsdev_io_cache_size": 256 00:23:39.479 } 00:23:39.479 } 00:23:39.479 ] 00:23:39.479 }, 00:23:39.479 { 00:23:39.479 "subsystem": "keyring", 00:23:39.479 "config": [] 00:23:39.479 }, 00:23:39.479 { 00:23:39.479 "subsystem": "iobuf", 00:23:39.479 "config": [ 00:23:39.479 { 00:23:39.479 "method": "iobuf_set_options", 00:23:39.479 "params": { 00:23:39.479 "small_pool_count": 8192, 00:23:39.479 "large_pool_count": 1024, 00:23:39.479 "small_bufsize": 8192, 00:23:39.479 "large_bufsize": 135168, 00:23:39.479 "enable_numa": false 00:23:39.479 } 00:23:39.479 } 00:23:39.479 ] 00:23:39.479 }, 00:23:39.479 { 00:23:39.479 "subsystem": "sock", 00:23:39.479 "config": [ 00:23:39.479 { 00:23:39.479 "method": "sock_set_default_impl", 00:23:39.479 "params": { 00:23:39.479 "impl_name": "posix" 00:23:39.479 } 00:23:39.479 }, 00:23:39.479 { 00:23:39.479 "method": "sock_impl_set_options", 00:23:39.479 "params": { 00:23:39.479 "impl_name": "ssl", 00:23:39.479 "recv_buf_size": 4096, 00:23:39.479 "send_buf_size": 4096, 00:23:39.479 "enable_recv_pipe": true, 00:23:39.479 "enable_quickack": false, 00:23:39.479 "enable_placement_id": 0, 00:23:39.479 "enable_zerocopy_send_server": true, 00:23:39.479 "enable_zerocopy_send_client": false, 00:23:39.479 "zerocopy_threshold": 0, 00:23:39.479 "tls_version": 0, 00:23:39.479 "enable_ktls": false 00:23:39.479 } 00:23:39.479 }, 00:23:39.479 { 00:23:39.479 "method": "sock_impl_set_options", 00:23:39.479 "params": { 00:23:39.479 "impl_name": "posix", 00:23:39.479 "recv_buf_size": 2097152, 00:23:39.479 "send_buf_size": 2097152, 00:23:39.479 "enable_recv_pipe": true, 00:23:39.479 "enable_quickack": false, 00:23:39.479 "enable_placement_id": 0, 00:23:39.479 "enable_zerocopy_send_server": true, 00:23:39.479 "enable_zerocopy_send_client": false, 00:23:39.479 "zerocopy_threshold": 0, 00:23:39.479 "tls_version": 0, 00:23:39.479 "enable_ktls": false 00:23:39.479 } 00:23:39.479 } 00:23:39.479 ] 00:23:39.479 }, 00:23:39.479 { 00:23:39.479 "subsystem": "vmd", 00:23:39.480 "config": [] 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "subsystem": "accel", 00:23:39.480 "config": [ 00:23:39.480 { 00:23:39.480 "method": "accel_set_options", 00:23:39.480 "params": { 00:23:39.480 "small_cache_size": 128, 
00:23:39.480 "large_cache_size": 16, 00:23:39.480 "task_count": 2048, 00:23:39.480 "sequence_count": 2048, 00:23:39.480 "buf_count": 2048 00:23:39.480 } 00:23:39.480 } 00:23:39.480 ] 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "subsystem": "bdev", 00:23:39.480 "config": [ 00:23:39.480 { 00:23:39.480 "method": "bdev_set_options", 00:23:39.480 "params": { 00:23:39.480 "bdev_io_pool_size": 65535, 00:23:39.480 "bdev_io_cache_size": 256, 00:23:39.480 "bdev_auto_examine": true, 00:23:39.480 "iobuf_small_cache_size": 128, 00:23:39.480 "iobuf_large_cache_size": 16 00:23:39.480 } 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "method": "bdev_raid_set_options", 00:23:39.480 "params": { 00:23:39.480 "process_window_size_kb": 1024, 00:23:39.480 "process_max_bandwidth_mb_sec": 0 00:23:39.480 } 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "method": "bdev_iscsi_set_options", 00:23:39.480 "params": { 00:23:39.480 "timeout_sec": 30 00:23:39.480 } 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "method": "bdev_nvme_set_options", 00:23:39.480 "params": { 00:23:39.480 "action_on_timeout": "none", 00:23:39.480 "timeout_us": 0, 00:23:39.480 "timeout_admin_us": 0, 00:23:39.480 "keep_alive_timeout_ms": 10000, 00:23:39.480 "arbitration_burst": 0, 00:23:39.480 "low_priority_weight": 0, 00:23:39.480 "medium_priority_weight": 0, 00:23:39.480 "high_priority_weight": 0, 00:23:39.480 "nvme_adminq_poll_period_us": 10000, 00:23:39.480 "nvme_ioq_poll_period_us": 0, 00:23:39.480 "io_queue_requests": 0, 00:23:39.480 "delay_cmd_submit": true, 00:23:39.480 "transport_retry_count": 4, 00:23:39.480 "bdev_retry_count": 3, 00:23:39.480 "transport_ack_timeout": 0, 00:23:39.480 "ctrlr_loss_timeout_sec": 0, 00:23:39.480 "reconnect_delay_sec": 0, 00:23:39.480 "fast_io_fail_timeout_sec": 0, 00:23:39.480 "disable_auto_failback": false, 00:23:39.480 "generate_uuids": false, 00:23:39.480 "transport_tos": 0, 00:23:39.480 "nvme_error_stat": false, 00:23:39.480 "rdma_srq_size": 0, 00:23:39.480 "io_path_stat": false, 00:23:39.480 "allow_accel_sequence": false, 00:23:39.480 "rdma_max_cq_size": 0, 00:23:39.480 "rdma_cm_event_timeout_ms": 0, 00:23:39.480 "dhchap_digests": [ 00:23:39.480 "sha256", 00:23:39.480 "sha384", 00:23:39.480 "sha512" 00:23:39.480 ], 00:23:39.480 "dhchap_dhgroups": [ 00:23:39.480 "null", 00:23:39.480 "ffdhe2048", 00:23:39.480 "ffdhe3072", 00:23:39.480 "ffdhe4096", 00:23:39.480 "ffdhe6144", 00:23:39.480 "ffdhe8192" 00:23:39.480 ] 00:23:39.480 } 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "method": "bdev_nvme_set_hotplug", 00:23:39.480 "params": { 00:23:39.480 "period_us": 100000, 00:23:39.480 "enable": false 00:23:39.480 } 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "method": "bdev_malloc_create", 00:23:39.480 "params": { 00:23:39.480 "name": "malloc0", 00:23:39.480 "num_blocks": 8192, 00:23:39.480 "block_size": 4096, 00:23:39.480 "physical_block_size": 4096, 00:23:39.480 "uuid": "44fe87e5-357f-408a-ae3e-3656fb322c87", 00:23:39.480 "optimal_io_boundary": 0, 00:23:39.480 "md_size": 0, 00:23:39.480 "dif_type": 0, 00:23:39.480 "dif_is_head_of_md": false, 00:23:39.480 "dif_pi_format": 0 00:23:39.480 } 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "method": "bdev_wait_for_examine" 00:23:39.480 } 00:23:39.480 ] 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "subsystem": "scsi", 00:23:39.480 "config": null 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "subsystem": "scheduler", 00:23:39.480 "config": [ 00:23:39.480 { 00:23:39.480 "method": "framework_set_scheduler", 00:23:39.480 "params": { 00:23:39.480 "name": "static" 00:23:39.480 } 
00:23:39.480 } 00:23:39.480 ] 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "subsystem": "vhost_scsi", 00:23:39.480 "config": [] 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "subsystem": "vhost_blk", 00:23:39.480 "config": [] 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "subsystem": "ublk", 00:23:39.480 "config": [ 00:23:39.480 { 00:23:39.480 "method": "ublk_create_target", 00:23:39.480 "params": { 00:23:39.480 "cpumask": "1" 00:23:39.480 } 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "method": "ublk_start_disk", 00:23:39.480 "params": { 00:23:39.480 "bdev_name": "malloc0", 00:23:39.480 "ublk_id": 0, 00:23:39.480 "num_queues": 1, 00:23:39.480 "queue_depth": 128 00:23:39.480 } 00:23:39.480 } 00:23:39.480 ] 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "subsystem": "nbd", 00:23:39.480 "config": [] 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "subsystem": "nvmf", 00:23:39.480 "config": [ 00:23:39.480 { 00:23:39.480 "method": "nvmf_set_config", 00:23:39.480 "params": { 00:23:39.480 "discovery_filter": "match_any", 00:23:39.480 "admin_cmd_passthru": { 00:23:39.480 "identify_ctrlr": false 00:23:39.480 }, 00:23:39.480 "dhchap_digests": [ 00:23:39.480 "sha256", 00:23:39.480 "sha384", 00:23:39.480 "sha512" 00:23:39.480 ], 00:23:39.480 "dhchap_dhgroups": [ 00:23:39.480 "null", 00:23:39.480 "ffdhe2048", 00:23:39.480 "ffdhe3072", 00:23:39.480 "ffdhe4096", 00:23:39.480 "ffdhe6144", 00:23:39.480 "ffdhe8192" 00:23:39.480 ] 00:23:39.480 } 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "method": "nvmf_set_max_subsystems", 00:23:39.480 "params": { 00:23:39.480 "max_subsystems": 1024 00:23:39.480 } 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "method": "nvmf_set_crdt", 00:23:39.480 "params": { 00:23:39.480 "crdt1": 0, 00:23:39.480 "crdt2": 0, 00:23:39.480 "crdt3": 0 00:23:39.480 } 00:23:39.480 } 00:23:39.480 ] 00:23:39.480 }, 00:23:39.480 { 00:23:39.480 "subsystem": "iscsi", 00:23:39.480 "config": [ 00:23:39.480 { 00:23:39.480 "method": "iscsi_set_options", 00:23:39.480 "params": { 00:23:39.481 "node_base": "iqn.2016-06.io.spdk", 00:23:39.481 "max_sessions": 128, 00:23:39.481 "max_connections_per_session": 2, 00:23:39.481 "max_queue_depth": 64, 00:23:39.481 "default_time2wait": 2, 00:23:39.481 "default_time2retain": 20, 00:23:39.481 "first_burst_length": 8192, 00:23:39.481 "immediate_data": true, 00:23:39.481 "allow_duplicated_isid": false, 00:23:39.481 "error_recovery_level": 0, 00:23:39.481 "nop_timeout": 60, 00:23:39.481 "nop_in_interval": 30, 00:23:39.481 "disable_chap": false, 00:23:39.481 "require_chap": false, 00:23:39.481 "mutual_chap": false, 00:23:39.481 "chap_group": 0, 00:23:39.481 "max_large_datain_per_connection": 64, 00:23:39.481 "max_r2t_per_connection": 4, 00:23:39.481 "pdu_pool_size": 36864, 00:23:39.481 "immediate_data_pool_size": 16384, 00:23:39.481 "data_out_pool_size": 2048 00:23:39.481 } 00:23:39.481 } 00:23:39.481 ] 00:23:39.481 } 00:23:39.481 ] 00:23:39.481 }' 00:23:39.481 [2024-11-26 18:27:32.329141] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
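test_save_ublk_config has just fed the JSON blob echoed above into a brand-new spdk_tgt over /dev/fd/63: the first target (pid 75902) had its configuration dumped, was killed, and the saved config alone is now expected to recreate /dev/ublkb0. A minimal sketch of the same round-trip outside the test harness, assuming a running target and the standard SPDK repo layout:

    ./scripts/rpc.py save_config > ublk.json        # dump the live config, ublk target and disk included
    kill "$tgtpid" && wait "$tgtpid"                # stop the first target
    ./build/bin/spdk_tgt -L ublk -c ublk.json &     # restart purely from the saved config
    ./scripts/rpc.py ublk_get_disks                 # /dev/ublkb0 should reappear, as the checks below confirm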
00:23:39.481 [2024-11-26 18:27:32.329273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75970 ] 00:23:39.481 [2024-11-26 18:27:32.510106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.481 [2024-11-26 18:27:32.624182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.420 [2024-11-26 18:27:33.665652] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:40.420 [2024-11-26 18:27:33.666802] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:40.420 [2024-11-26 18:27:33.673799] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:23:40.420 [2024-11-26 18:27:33.673885] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:23:40.420 [2024-11-26 18:27:33.673898] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:40.420 [2024-11-26 18:27:33.673907] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:40.420 [2024-11-26 18:27:33.683594] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:40.420 [2024-11-26 18:27:33.683635] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:40.420 [2024-11-26 18:27:33.690673] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:40.420 [2024-11-26 18:27:33.690769] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:40.421 [2024-11-26 18:27:33.707665] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75970 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75970 ']' 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75970 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75970 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75970' 00:23:40.680 killing process with pid 75970 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75970 00:23:40.680 18:27:33 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75970 00:23:42.090 [2024-11-26 18:27:35.367053] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:23:42.091 [2024-11-26 18:27:35.399738] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:42.091 [2024-11-26 18:27:35.399879] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:42.091 [2024-11-26 18:27:35.407689] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:42.091 [2024-11-26 18:27:35.407743] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:42.091 [2024-11-26 18:27:35.407750] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:42.091 [2024-11-26 18:27:35.407771] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:42.091 [2024-11-26 18:27:35.407903] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:43.998 18:27:37 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:23:43.998 00:23:43.998 real 0m10.354s 00:23:43.998 user 0m7.764s 00:23:43.998 sys 0m3.349s 00:23:43.998 ************************************ 00:23:43.998 END TEST test_save_ublk_config 00:23:43.998 ************************************ 00:23:43.998 18:27:37 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:43.998 18:27:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:23:43.998 18:27:37 ublk -- ublk/ublk.sh@139 -- # spdk_pid=76080 00:23:43.998 18:27:37 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:23:43.998 18:27:37 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:43.998 18:27:37 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76080 00:23:43.998 18:27:37 ublk -- common/autotest_common.sh@835 -- # '[' -z 76080 ']' 00:23:43.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.998 18:27:37 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.998 18:27:37 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.998 18:27:37 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.998 18:27:37 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.998 18:27:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:44.257 [2024-11-26 18:27:37.412949] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
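Here waitforlisten keeps the script from issuing any rpc_cmd calls until the freshly launched spdk_tgt (pid 76080, core mask 0x3) is actually answering on /var/tmp/spdk.sock. A simplified stand-in for that helper (a sketch, not the exact autotest_common.sh implementation):

    ./build/bin/spdk_tgt -m 0x3 -L ublk &
    tgtpid=$!
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeds only once the RPC socket is up
        ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.1
    done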
00:23:44.257 [2024-11-26 18:27:37.413799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76080 ] 00:23:44.521 [2024-11-26 18:27:37.611278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:44.521 [2024-11-26 18:27:37.728864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.521 [2024-11-26 18:27:37.728907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.460 18:27:38 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.460 18:27:38 ublk -- common/autotest_common.sh@868 -- # return 0 00:23:45.460 18:27:38 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:23:45.460 18:27:38 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:45.460 18:27:38 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:45.460 18:27:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:45.460 ************************************ 00:23:45.460 START TEST test_create_ublk 00:23:45.460 ************************************ 00:23:45.460 18:27:38 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:23:45.460 18:27:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:23:45.461 18:27:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.461 18:27:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:45.461 [2024-11-26 18:27:38.644667] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:45.461 [2024-11-26 18:27:38.647411] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:45.461 18:27:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.461 18:27:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:23:45.461 18:27:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:23:45.461 18:27:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.461 18:27:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:45.721 18:27:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.721 18:27:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:23:45.721 18:27:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:23:45.721 18:27:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.721 18:27:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:45.721 [2024-11-26 18:27:38.939797] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:23:45.721 [2024-11-26 18:27:38.940210] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:23:45.721 [2024-11-26 18:27:38.940230] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:45.721 [2024-11-26 18:27:38.940238] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:45.721 [2024-11-26 18:27:38.947727] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:45.721 [2024-11-26 18:27:38.947755] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:45.721 
[2024-11-26 18:27:38.955689] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:45.721 [2024-11-26 18:27:38.956321] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:45.721 [2024-11-26 18:27:38.970710] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:45.721 18:27:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.721 18:27:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:23:45.721 18:27:38 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:23:45.721 18:27:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:23:45.721 18:27:38 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.721 18:27:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:45.721 18:27:38 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.721 18:27:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:23:45.721 { 00:23:45.721 "ublk_device": "/dev/ublkb0", 00:23:45.721 "id": 0, 00:23:45.721 "queue_depth": 512, 00:23:45.721 "num_queues": 4, 00:23:45.721 "bdev_name": "Malloc0" 00:23:45.721 } 00:23:45.721 ]' 00:23:45.721 18:27:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:23:45.721 18:27:39 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:23:45.721 18:27:39 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:23:45.981 18:27:39 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:23:45.981 18:27:39 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:23:45.981 18:27:39 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:23:45.981 18:27:39 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:23:45.981 18:27:39 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:23:45.981 18:27:39 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:23:45.981 18:27:39 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:23:45.981 18:27:39 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:23:45.981 18:27:39 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:23:45.981 18:27:39 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:23:45.981 18:27:39 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:23:45.981 18:27:39 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:23:45.981 18:27:39 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:23:45.981 18:27:39 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:23:45.981 18:27:39 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:23:45.981 18:27:39 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:23:45.981 18:27:39 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:23:45.981 18:27:39 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
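The run_fio_test helper has now assembled its full command line; pulled out of the trace above for readability, the invocation that runs next is:

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

That is ten seconds of direct, time-based 4 KiB pattern writes (0xcc) across the first 128 MiB of /dev/ublkb0; because the write phase consumes the whole runtime, fio itself notes below that the separate verification read phase never starts.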
00:23:45.981 18:27:39 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:23:46.242 fio: verification read phase will never start because write phase uses all of runtime 00:23:46.242 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:23:46.242 fio-3.35 00:23:46.242 Starting 1 process 00:23:56.232 00:23:56.232 fio_test: (groupid=0, jobs=1): err= 0: pid=76133: Tue Nov 26 18:27:49 2024 00:23:56.232 write: IOPS=13.8k, BW=54.1MiB/s (56.7MB/s)(541MiB/10001msec); 0 zone resets 00:23:56.232 clat (usec): min=49, max=7593, avg=71.31, stdev=138.54 00:23:56.232 lat (usec): min=49, max=7656, avg=71.82, stdev=138.58 00:23:56.232 clat percentiles (usec): 00:23:56.232 | 1.00th=[ 57], 5.00th=[ 59], 10.00th=[ 59], 20.00th=[ 61], 00:23:56.232 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:23:56.232 | 70.00th=[ 66], 80.00th=[ 69], 90.00th=[ 73], 95.00th=[ 78], 00:23:56.232 | 99.00th=[ 92], 99.50th=[ 105], 99.90th=[ 2999], 99.95th=[ 3490], 00:23:56.232 | 99.99th=[ 4080] 00:23:56.232 bw ( KiB/s): min=20328, max=58752, per=100.00%, avg=55445.89, stdev=8541.74, samples=19 00:23:56.232 iops : min= 5082, max=14688, avg=13861.47, stdev=2135.44, samples=19 00:23:56.232 lat (usec) : 50=0.01%, 100=99.41%, 250=0.28%, 500=0.02%, 750=0.02% 00:23:56.232 lat (usec) : 1000=0.01% 00:23:56.232 lat (msec) : 2=0.08%, 4=0.16%, 10=0.01% 00:23:56.232 cpu : usr=2.09%, sys=9.21%, ctx=138510, majf=0, minf=796 00:23:56.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:56.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.232 issued rwts: total=0,138510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:56.232 00:23:56.232 Run status group 0 (all jobs): 00:23:56.233 WRITE: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=541MiB (567MB), run=10001-10001msec 00:23:56.233 00:23:56.233 Disk stats (read/write): 00:23:56.233 ublkb0: ios=0/136998, merge=0/0, ticks=0/8678, in_queue=8678, util=99.14% 00:23:56.233 18:27:49 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:56.233 [2024-11-26 18:27:49.480564] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:23:56.233 [2024-11-26 18:27:49.527668] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:56.233 [2024-11-26 18:27:49.528091] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:56.233 [2024-11-26 18:27:49.536685] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:56.233 [2024-11-26 18:27:49.537051] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:56.233 [2024-11-26 18:27:49.540631] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.233 18:27:49 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:56.233 [2024-11-26 18:27:49.549771] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:23:56.233 request: 00:23:56.233 { 00:23:56.233 "ublk_id": 0, 00:23:56.233 "method": "ublk_stop_disk", 00:23:56.233 "req_id": 1 00:23:56.233 } 00:23:56.233 Got JSON-RPC error response 00:23:56.233 response: 00:23:56.233 { 00:23:56.233 "code": -19, 00:23:56.233 "message": "No such device" 00:23:56.233 } 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:56.233 18:27:49 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.233 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:56.505 [2024-11-26 18:27:49.566745] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:56.505 [2024-11-26 18:27:49.573652] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:56.505 [2024-11-26 18:27:49.573702] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:56.505 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.505 18:27:49 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:56.505 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.505 18:27:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:57.089 18:27:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.089 18:27:50 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:23:57.089 18:27:50 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:23:57.089 18:27:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.089 18:27:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:57.089 18:27:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.089 18:27:50 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:23:57.089 18:27:50 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:23:57.089 18:27:50 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:23:57.089 18:27:50 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:23:57.089 18:27:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.089 18:27:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:57.089 18:27:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.089 18:27:50 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:23:57.089 18:27:50 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:23:57.089 ************************************ 00:23:57.089 END TEST test_create_ublk 00:23:57.089 ************************************ 00:23:57.089 18:27:50 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:23:57.089 00:23:57.089 real 0m11.786s 00:23:57.089 user 0m0.609s 00:23:57.089 sys 0m1.051s 00:23:57.089 18:27:50 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.089 18:27:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:57.349 18:27:50 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:23:57.349 18:27:50 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:57.349 18:27:50 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.349 18:27:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:23:57.349 ************************************ 00:23:57.349 START TEST test_create_multi_ublk 00:23:57.349 ************************************ 00:23:57.349 18:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:23:57.349 18:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:23:57.349 18:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.349 18:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:57.349 [2024-11-26 18:27:50.489638] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:23:57.349 [2024-11-26 18:27:50.492348] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:23:57.349 18:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.349 18:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:23:57.349 18:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:23:57.349 18:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:57.349 18:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:23:57.349 18:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.349 18:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:57.610 18:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.610 18:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:23:57.610 18:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:23:57.610 18:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.610 18:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:57.610 [2024-11-26 18:27:50.784803] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:23:57.610 [2024-11-26 18:27:50.785251] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:23:57.610 [2024-11-26 18:27:50.785271] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:23:57.610 [2024-11-26 18:27:50.785283] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:23:57.610 [2024-11-26 18:27:50.800660] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:57.610 [2024-11-26 18:27:50.800697] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:57.610 [2024-11-26 18:27:50.807629] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:57.610 [2024-11-26 18:27:50.808317] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:23:57.610 [2024-11-26 18:27:50.818729] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:23:57.610 18:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.610 18:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:23:57.610 18:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:57.610 18:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:23:57.610 18:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.610 18:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:57.871 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.871 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:23:57.871 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:23:57.871 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.871 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:57.871 [2024-11-26 18:27:51.110779] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:23:57.871 [2024-11-26 18:27:51.111195] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:23:57.871 [2024-11-26 18:27:51.111214] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:23:57.871 [2024-11-26 18:27:51.111221] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:23:57.871 [2024-11-26 18:27:51.118649] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:57.871 [2024-11-26 18:27:51.118673] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:57.872 [2024-11-26 18:27:51.126647] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:57.872 [2024-11-26 18:27:51.127203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:23:57.872 [2024-11-26 18:27:51.143684] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:23:57.872 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.872 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:23:57.872 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:57.872 
18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:23:57.872 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.872 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:58.132 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.132 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:23:58.132 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:23:58.132 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.132 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:58.132 [2024-11-26 18:27:51.439784] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:23:58.132 [2024-11-26 18:27:51.440168] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:23:58.132 [2024-11-26 18:27:51.440184] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:23:58.132 [2024-11-26 18:27:51.440193] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:23:58.132 [2024-11-26 18:27:51.447948] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:58.132 [2024-11-26 18:27:51.447976] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:58.132 [2024-11-26 18:27:51.454674] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:58.132 [2024-11-26 18:27:51.455244] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:23:58.132 [2024-11-26 18:27:51.458032] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:23:58.132 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.132 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:23:58.132 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:58.132 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:23:58.132 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.132 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:58.702 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.702 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:23:58.702 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:23:58.702 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.702 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:58.702 [2024-11-26 18:27:51.738788] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:23:58.702 [2024-11-26 18:27:51.739166] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:23:58.702 [2024-11-26 18:27:51.739185] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:23:58.702 [2024-11-26 18:27:51.739192] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:23:58.702 
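test_create_multi_ublk repeats the same bdev-plus-disk pairing once per device id, Malloc0 through Malloc3 onto ublk ids 0 through 3. The body of that loop, reduced to standalone rpc.py calls (a sketch, paths assumed):

    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # exposes /dev/ublkb$i
    done
    ./scripts/rpc.py ublk_get_disks   # yields the four-entry listing that the jq checks below verify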
[2024-11-26 18:27:51.749670] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:23:58.702 [2024-11-26 18:27:51.749695] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:23:58.702 [2024-11-26 18:27:51.756679] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:23:58.702 [2024-11-26 18:27:51.757256] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:23:58.702 [2024-11-26 18:27:51.765706] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:23:58.702 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.702 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:23:58.702 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:23:58.702 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.702 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:58.702 18:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.702 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:23:58.702 { 00:23:58.702 "ublk_device": "/dev/ublkb0", 00:23:58.702 "id": 0, 00:23:58.702 "queue_depth": 512, 00:23:58.702 "num_queues": 4, 00:23:58.702 "bdev_name": "Malloc0" 00:23:58.702 }, 00:23:58.702 { 00:23:58.702 "ublk_device": "/dev/ublkb1", 00:23:58.702 "id": 1, 00:23:58.702 "queue_depth": 512, 00:23:58.702 "num_queues": 4, 00:23:58.702 "bdev_name": "Malloc1" 00:23:58.702 }, 00:23:58.702 { 00:23:58.702 "ublk_device": "/dev/ublkb2", 00:23:58.702 "id": 2, 00:23:58.702 "queue_depth": 512, 00:23:58.702 "num_queues": 4, 00:23:58.702 "bdev_name": "Malloc2" 00:23:58.702 }, 00:23:58.702 { 00:23:58.702 "ublk_device": "/dev/ublkb3", 00:23:58.702 "id": 3, 00:23:58.702 "queue_depth": 512, 00:23:58.702 "num_queues": 4, 00:23:58.702 "bdev_name": "Malloc3" 00:23:58.702 } 00:23:58.702 ]' 00:23:58.702 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:23:58.703 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:58.703 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:23:58.703 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:23:58.703 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:23:58.703 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:23:58.703 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:23:58.703 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:23:58.703 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:23:58.703 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:23:58.703 18:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:23:58.703 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:23:58.703 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:58.703 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:23:58.962 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:23:58.962 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:23:58.962 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:23:58.962 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:23:58.962 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:23:58.962 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:23:58.962 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:23:58.962 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:23:58.962 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:23:58.962 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:58.962 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:23:59.221 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:23:59.222 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:23:59.222 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:23:59.222 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:23:59.222 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:23:59.222 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:23:59.222 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:23:59.222 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:23:59.222 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:23:59.222 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:59.222 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:23:59.222 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:23:59.222 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:23:59.481 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:23:59.481 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:23:59.482 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:23:59.482 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:23:59.482 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:23:59.482 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:23:59.482 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:23:59.482 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:23:59.482 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:23:59.482 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:59.482 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:23:59.482 18:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.482 18:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:59.482 [2024-11-26 18:27:52.762805] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:23:59.741 [2024-11-26 18:27:52.817653] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:59.741 [2024-11-26 18:27:52.818594] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:23:59.741 [2024-11-26 18:27:52.824655] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:59.741 [2024-11-26 18:27:52.825003] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:23:59.742 [2024-11-26 18:27:52.825024] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:59.742 [2024-11-26 18:27:52.832729] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:23:59.742 [2024-11-26 18:27:52.878678] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:59.742 [2024-11-26 18:27:52.879477] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:23:59.742 [2024-11-26 18:27:52.887657] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:59.742 [2024-11-26 18:27:52.887946] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:23:59.742 [2024-11-26 18:27:52.887965] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:23:59.742 [2024-11-26 18:27:52.894769] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:23:59.742 [2024-11-26 18:27:52.945076] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:59.742 [2024-11-26 18:27:52.946054] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:23:59.742 [2024-11-26 18:27:52.952660] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:59.742 [2024-11-26 18:27:52.952957] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:23:59.742 [2024-11-26 18:27:52.952976] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.742 18:27:52 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:23:59.742 [2024-11-26 18:27:52.968751] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:23:59.742 [2024-11-26 18:27:53.002068] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:59.742 [2024-11-26 18:27:53.002884] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:23:59.742 [2024-11-26 18:27:53.008678] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:59.742 [2024-11-26 18:27:53.008987] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:23:59.742 [2024-11-26 18:27:53.009005] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:23:59.742 18:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.742 18:27:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:24:00.001 [2024-11-26 18:27:53.250760] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:00.001 [2024-11-26 18:27:53.258641] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:00.001 [2024-11-26 18:27:53.258690] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:24:00.001 18:27:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:24:00.001 18:27:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:00.001 18:27:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:00.001 18:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.001 18:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:00.936 18:27:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.936 18:27:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:00.936 18:27:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:00.936 18:27:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.936 18:27:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:01.194 18:27:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.194 18:27:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:01.194 18:27:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:24:01.194 18:27:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.194 18:27:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:01.452 18:27:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.452 18:27:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:01.452 18:27:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:24:01.452 18:27:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.452 18:27:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:24:02.019 18:27:55 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:24:02.019 ************************************ 00:24:02.019 END TEST test_create_multi_ublk 00:24:02.019 ************************************ 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:24:02.019 00:24:02.019 real 0m4.755s 00:24:02.019 user 0m1.165s 00:24:02.019 sys 0m0.224s 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:02.019 18:27:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:02.019 18:27:55 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:02.019 18:27:55 ublk -- ublk/ublk.sh@147 -- # cleanup 00:24:02.019 18:27:55 ublk -- ublk/ublk.sh@130 -- # killprocess 76080 00:24:02.019 18:27:55 ublk -- common/autotest_common.sh@954 -- # '[' -z 76080 ']' 00:24:02.019 18:27:55 ublk -- common/autotest_common.sh@958 -- # kill -0 76080 00:24:02.019 18:27:55 ublk -- common/autotest_common.sh@959 -- # uname 00:24:02.019 18:27:55 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.019 18:27:55 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76080 00:24:02.019 18:27:55 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:02.019 18:27:55 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:02.019 18:27:55 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76080' 00:24:02.019 killing process with pid 76080 00:24:02.019 18:27:55 ublk -- common/autotest_common.sh@973 -- # kill 76080 00:24:02.019 18:27:55 ublk -- common/autotest_common.sh@978 -- # wait 76080 00:24:03.399 [2024-11-26 18:27:56.491500] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:24:03.399 [2024-11-26 18:27:56.491667] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:24:04.781 00:24:04.781 real 0m31.098s 00:24:04.781 user 0m44.411s 00:24:04.781 sys 0m10.566s 00:24:04.781 18:27:57 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.781 ************************************ 00:24:04.781 END TEST ublk 00:24:04.781 ************************************ 00:24:04.781 18:27:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:04.781 18:27:57 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:24:04.781 
18:27:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:04.781 18:27:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.781 18:27:57 -- common/autotest_common.sh@10 -- # set +x 00:24:04.781 ************************************ 00:24:04.781 START TEST ublk_recovery 00:24:04.781 ************************************ 00:24:04.781 18:27:57 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:24:04.781 * Looking for test storage... 00:24:04.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:24:04.781 18:27:57 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:04.781 18:27:57 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:24:04.781 18:27:57 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:04.781 18:27:58 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.781 18:27:58 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:24:04.781 18:27:58 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.781 18:27:58 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:04.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.781 --rc genhtml_branch_coverage=1 00:24:04.781 --rc genhtml_function_coverage=1 00:24:04.781 --rc genhtml_legend=1 00:24:04.781 --rc geninfo_all_blocks=1 00:24:04.781 --rc geninfo_unexecuted_blocks=1 00:24:04.781 00:24:04.781 ' 00:24:04.781 18:27:58 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:04.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.781 --rc genhtml_branch_coverage=1 00:24:04.781 --rc genhtml_function_coverage=1 00:24:04.781 --rc genhtml_legend=1 00:24:04.781 --rc geninfo_all_blocks=1 00:24:04.781 --rc geninfo_unexecuted_blocks=1 00:24:04.781 00:24:04.781 ' 00:24:04.781 18:27:58 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:04.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.781 --rc genhtml_branch_coverage=1 00:24:04.781 --rc genhtml_function_coverage=1 00:24:04.781 --rc genhtml_legend=1 00:24:04.781 --rc geninfo_all_blocks=1 00:24:04.781 --rc geninfo_unexecuted_blocks=1 00:24:04.781 00:24:04.781 ' 00:24:04.781 18:27:58 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:04.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.781 --rc genhtml_branch_coverage=1 00:24:04.781 --rc genhtml_function_coverage=1 00:24:04.781 --rc genhtml_legend=1 00:24:04.781 --rc geninfo_all_blocks=1 00:24:04.781 --rc geninfo_unexecuted_blocks=1 00:24:04.781 00:24:04.781 ' 00:24:04.781 18:27:58 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:24:04.781 18:27:58 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:24:04.781 18:27:58 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:24:04.781 18:27:58 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:24:04.781 18:27:58 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:24:04.781 18:27:58 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:24:04.782 18:27:58 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:24:04.782 18:27:58 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:24:04.782 18:27:58 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:24:04.782 18:27:58 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:24:04.782 18:27:58 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76506 00:24:04.782 18:27:58 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:04.782 18:27:58 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:04.782 18:27:58 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76506 00:24:04.782 18:27:58 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76506 ']' 00:24:04.782 18:27:58 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.782 18:27:58 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.782 18:27:58 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.782 18:27:58 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.782 18:27:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:05.042 [2024-11-26 18:27:58.145502] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:24:05.042 [2024-11-26 18:27:58.146167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76506 ] 00:24:05.042 [2024-11-26 18:27:58.321610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:05.302 [2024-11-26 18:27:58.433504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.302 [2024-11-26 18:27:58.433555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.244 18:27:59 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.244 18:27:59 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:24:06.244 18:27:59 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:24:06.244 18:27:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.244 18:27:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.244 [2024-11-26 18:27:59.294637] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:06.244 [2024-11-26 18:27:59.297275] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:06.244 18:27:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.244 18:27:59 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:24:06.244 18:27:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.244 18:27:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.244 malloc0 00:24:06.244 18:27:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.244 18:27:59 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:24:06.244 18:27:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.244 18:27:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.244 [2024-11-26 18:27:59.449765] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:24:06.244 [2024-11-26 18:27:59.449875] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:24:06.244 [2024-11-26 18:27:59.449887] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:06.244 [2024-11-26 18:27:59.449896] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:24:06.244 [2024-11-26 18:27:59.458731] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:06.244 [2024-11-26 18:27:59.458751] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:06.244 [2024-11-26 18:27:59.465653] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:06.244 [2024-11-26 18:27:59.465790] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:24:06.244 [2024-11-26 18:27:59.480659] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:24:06.244 1 00:24:06.244 18:27:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.244 18:27:59 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:24:07.184 18:28:00 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76547 00:24:07.184 18:28:00 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:24:07.184 18:28:00 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:24:07.444 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:07.444 fio-3.35 00:24:07.444 Starting 1 process 00:24:12.771 18:28:05 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76506 00:24:12.771 18:28:05 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:24:18.083 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76506 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:24:18.083 18:28:10 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76651 00:24:18.083 18:28:10 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:18.083 18:28:10 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:18.083 18:28:10 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76651 00:24:18.083 18:28:10 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76651 ']' 00:24:18.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.083 18:28:10 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.083 18:28:10 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.083 18:28:10 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.083 18:28:10 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.083 18:28:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.083 [2024-11-26 18:28:10.603778] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
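For readers following the recovery flow that unfolds here: the test drives a ublk disk with fio, SIGKILLs the first spdk_tgt (pid 76506) mid-I/O, then brings up a second target (pid 76651) that re-adopts the still-open device. A minimal sketch of that scenario, assuming ublk_drv is loaded; RPC names and flags are copied from the trace, and $OLD_PID is an illustrative stand-in for the killed target's pid:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC ublk_create_target
    $RPC bdev_malloc_create -b malloc0 64 4096
    $RPC ublk_start_disk malloc0 1 -q 2 -d 128   # exposes /dev/ublkb1
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    kill -9 "$OLD_PID"                           # crash the target mid-I/O
    # relaunch spdk_tgt, then hand the surviving /dev/ublkb1 back to it:
    $RPC ublk_create_target
    $RPC bdev_malloc_create -b malloc0 64 4096
    $RPC ublk_recover_disk malloc0 1             # replays queue state, resumes I/O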
00:24:18.083 [2024-11-26 18:28:10.603971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76651 ] 00:24:18.083 [2024-11-26 18:28:10.767028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:18.083 [2024-11-26 18:28:10.880918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.084 [2024-11-26 18:28:10.880965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.652 18:28:11 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.652 18:28:11 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:24:18.652 18:28:11 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:24:18.652 18:28:11 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.652 18:28:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.652 [2024-11-26 18:28:11.772644] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:18.652 [2024-11-26 18:28:11.775273] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:18.652 18:28:11 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.652 18:28:11 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:24:18.652 18:28:11 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.652 18:28:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.652 malloc0 00:24:18.652 18:28:11 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.652 18:28:11 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:24:18.652 18:28:11 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.652 18:28:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:18.652 [2024-11-26 18:28:11.923798] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:24:18.652 [2024-11-26 18:28:11.923838] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:18.652 [2024-11-26 18:28:11.923848] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:24:18.652 [2024-11-26 18:28:11.931689] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:24:18.652 [2024-11-26 18:28:11.931726] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:24:18.652 [2024-11-26 18:28:11.931738] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:24:18.652 [2024-11-26 18:28:11.931839] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:24:18.652 1 00:24:18.652 18:28:11 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.652 18:28:11 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76547 00:24:18.652 [2024-11-26 18:28:11.939675] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:24:18.652 [2024-11-26 18:28:11.946103] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:24:18.652 [2024-11-26 18:28:11.953875] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:24:18.652 [2024-11-26 
18:28:11.953904] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:25:14.901 00:25:14.901 fio_test: (groupid=0, jobs=1): err= 0: pid=76550: Tue Nov 26 18:29:00 2024 00:25:14.901 read: IOPS=20.8k, BW=81.3MiB/s (85.2MB/s)(4877MiB/60001msec) 00:25:14.901 slat (nsec): min=1148, max=485518, avg=7712.01, stdev=3081.73 00:25:14.901 clat (usec): min=966, max=6465.9k, avg=3043.42, stdev=47688.82 00:25:14.901 lat (usec): min=971, max=6465.9k, avg=3051.13, stdev=47688.83 00:25:14.901 clat percentiles (usec): 00:25:14.901 | 1.00th=[ 1991], 5.00th=[ 2114], 10.00th=[ 2180], 20.00th=[ 2245], 00:25:14.901 | 30.00th=[ 2311], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2606], 00:25:14.901 | 70.00th=[ 2900], 80.00th=[ 3032], 90.00th=[ 3458], 95.00th=[ 3884], 00:25:14.901 | 99.00th=[ 5211], 99.50th=[ 5735], 99.90th=[ 6915], 99.95th=[ 7308], 00:25:14.901 | 99.99th=[13435] 00:25:14.901 bw ( KiB/s): min=27768, max=108600, per=100.00%, avg=92632.69, stdev=14586.26, samples=107 00:25:14.901 iops : min= 6942, max=27150, avg=23158.15, stdev=3646.59, samples=107 00:25:14.901 write: IOPS=20.8k, BW=81.2MiB/s (85.2MB/s)(4873MiB/60001msec); 0 zone resets 00:25:14.901 slat (nsec): min=1267, max=309162, avg=7811.08, stdev=3065.46 00:25:14.901 clat (usec): min=941, max=6466.0k, avg=3093.22, stdev=44817.68 00:25:14.901 lat (usec): min=950, max=6466.0k, avg=3101.03, stdev=44817.69 00:25:14.901 clat percentiles (usec): 00:25:14.901 | 1.00th=[ 1975], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2343], 00:25:14.901 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2573], 60.00th=[ 2704], 00:25:14.902 | 70.00th=[ 2999], 80.00th=[ 3163], 90.00th=[ 3556], 95.00th=[ 3982], 00:25:14.902 | 99.00th=[ 5211], 99.50th=[ 5800], 99.90th=[ 6915], 99.95th=[ 7242], 00:25:14.902 | 99.99th=[13566] 00:25:14.902 bw ( KiB/s): min=28400, max=109840, per=100.00%, avg=92553.58, stdev=14526.46, samples=107 00:25:14.902 iops : min= 7100, max=27460, avg=23138.36, stdev=3631.63, samples=107 00:25:14.902 lat (usec) : 1000=0.01% 00:25:14.902 lat (msec) : 2=1.24%, 4=94.37%, 10=4.37%, 20=0.02%, >=2000=0.01% 00:25:14.902 cpu : usr=8.92%, sys=32.83%, ctx=105872, majf=0, minf=13 00:25:14.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:25:14.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:14.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:14.902 issued rwts: total=1248542,1247422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:14.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:14.902 00:25:14.902 Run status group 0 (all jobs): 00:25:14.902 READ: bw=81.3MiB/s (85.2MB/s), 81.3MiB/s-81.3MiB/s (85.2MB/s-85.2MB/s), io=4877MiB (5114MB), run=60001-60001msec 00:25:14.902 WRITE: bw=81.2MiB/s (85.2MB/s), 81.2MiB/s-81.2MiB/s (85.2MB/s-85.2MB/s), io=4873MiB (5109MB), run=60001-60001msec 00:25:14.902 00:25:14.902 Disk stats (read/write): 00:25:14.902 ublkb1: ios=1246084/1245018, merge=0/0, ticks=3698038/3624081, in_queue=7322120, util=99.94% 00:25:14.902 18:29:00 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.902 [2024-11-26 18:29:00.770706] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:25:14.902 [2024-11-26 18:29:00.798750] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:25:14.902 [2024-11-26 18:29:00.798942] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:25:14.902 [2024-11-26 18:29:00.807695] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:14.902 [2024-11-26 18:29:00.807842] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:25:14.902 [2024-11-26 18:29:00.807856] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.902 18:29:00 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.902 [2024-11-26 18:29:00.815752] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:25:14.902 [2024-11-26 18:29:00.823352] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:25:14.902 [2024-11-26 18:29:00.823405] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.902 18:29:00 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:25:14.902 18:29:00 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:25:14.902 18:29:00 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76651 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76651 ']' 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76651 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76651 00:25:14.902 killing process with pid 76651 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76651' 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76651 00:25:14.902 18:29:00 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76651 00:25:14.902 [2024-11-26 18:29:03.410981] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:25:14.902 [2024-11-26 18:29:03.411058] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:25:14.902 ************************************ 00:25:14.902 END TEST ublk_recovery 00:25:14.902 ************************************ 00:25:14.902 00:25:14.902 real 1m7.535s 00:25:14.902 user 1m51.257s 00:25:14.902 sys 0m37.947s 00:25:14.902 18:29:05 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:14.902 18:29:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.902 18:29:05 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:25:14.902 18:29:05 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:25:14.902 18:29:05 -- spdk/autotest.sh@260 -- # timing_exit lib 00:25:14.902 18:29:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.902 18:29:05 -- common/autotest_common.sh@10 -- # set +x 00:25:14.902 18:29:05 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:25:14.902 18:29:05 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:25:14.902 18:29:05 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:25:14.902 18:29:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:14.902 18:29:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:14.902 18:29:05 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:14.902 18:29:05 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:14.902 18:29:05 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:14.902 18:29:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:14.902 18:29:05 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:25:14.902 18:29:05 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:14.902 18:29:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:14.902 18:29:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:14.902 18:29:05 -- common/autotest_common.sh@10 -- # set +x 00:25:14.902 ************************************ 00:25:14.902 START TEST ftl 00:25:14.902 ************************************ 00:25:14.902 18:29:05 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:14.902 * Looking for test storage... 00:25:14.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:14.902 18:29:05 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:14.902 18:29:05 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:25:14.902 18:29:05 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:14.902 18:29:05 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:14.902 18:29:05 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:14.902 18:29:05 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:14.902 18:29:05 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:14.902 18:29:05 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.902 18:29:05 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:25:14.902 18:29:05 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:25:14.902 18:29:05 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:25:14.902 18:29:05 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:25:14.902 18:29:05 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:25:14.902 18:29:05 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:25:14.902 18:29:05 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:14.902 18:29:05 ftl -- scripts/common.sh@344 -- # case "$op" in 00:25:14.902 18:29:05 ftl -- scripts/common.sh@345 -- # : 1 00:25:14.902 18:29:05 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:14.902 18:29:05 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:14.902 18:29:05 ftl -- scripts/common.sh@365 -- # decimal 1 00:25:14.902 18:29:05 ftl -- scripts/common.sh@353 -- # local d=1 00:25:14.902 18:29:05 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.902 18:29:05 ftl -- scripts/common.sh@355 -- # echo 1 00:25:14.902 18:29:05 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:25:14.902 18:29:05 ftl -- scripts/common.sh@366 -- # decimal 2 00:25:14.902 18:29:05 ftl -- scripts/common.sh@353 -- # local d=2 00:25:14.902 18:29:05 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.902 18:29:05 ftl -- scripts/common.sh@355 -- # echo 2 00:25:14.902 18:29:05 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:25:14.902 18:29:05 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.902 18:29:05 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.902 18:29:05 ftl -- scripts/common.sh@368 -- # return 0 00:25:14.902 18:29:05 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.902 18:29:05 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:14.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.902 --rc genhtml_branch_coverage=1 00:25:14.902 --rc genhtml_function_coverage=1 00:25:14.902 --rc genhtml_legend=1 00:25:14.902 --rc geninfo_all_blocks=1 00:25:14.902 --rc geninfo_unexecuted_blocks=1 00:25:14.902 00:25:14.902 ' 00:25:14.902 18:29:05 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:14.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.902 --rc genhtml_branch_coverage=1 00:25:14.902 --rc genhtml_function_coverage=1 00:25:14.902 --rc genhtml_legend=1 00:25:14.902 --rc geninfo_all_blocks=1 00:25:14.902 --rc geninfo_unexecuted_blocks=1 00:25:14.902 00:25:14.902 ' 00:25:14.902 18:29:05 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:14.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.902 --rc genhtml_branch_coverage=1 00:25:14.902 --rc genhtml_function_coverage=1 00:25:14.902 --rc genhtml_legend=1 00:25:14.902 --rc geninfo_all_blocks=1 00:25:14.902 --rc geninfo_unexecuted_blocks=1 00:25:14.902 00:25:14.902 ' 00:25:14.902 18:29:05 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:14.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.902 --rc genhtml_branch_coverage=1 00:25:14.902 --rc genhtml_function_coverage=1 00:25:14.902 --rc genhtml_legend=1 00:25:14.902 --rc geninfo_all_blocks=1 00:25:14.902 --rc geninfo_unexecuted_blocks=1 00:25:14.902 00:25:14.902 ' 00:25:14.902 18:29:05 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:14.902 18:29:05 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:14.902 18:29:05 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:14.902 18:29:05 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:14.902 18:29:05 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
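The ftl.sh prologue traced below starts the target paused (--wait-for-rpc) so bdev options can be set before any controllers attach, then resumes initialization and feeds it the NVMe config emitted by gen_nvme.sh. Condensed, with commands as they appear in the trace (reading -d as disabling bdev auto-examine is an assumption here):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_set_options -d              # assumed: disable bdev auto-examine
    $RPC framework_start_init             # resume the paused initialization
    $RPC load_subsystem_config -j <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)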
00:25:14.903 18:29:05 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:14.903 18:29:05 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:14.903 18:29:05 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:14.903 18:29:05 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:14.903 18:29:05 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.903 18:29:05 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.903 18:29:05 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:14.903 18:29:05 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:14.903 18:29:05 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:14.903 18:29:05 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:14.903 18:29:05 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:14.903 18:29:05 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:14.903 18:29:05 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.903 18:29:05 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.903 18:29:05 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:14.903 18:29:05 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:14.903 18:29:05 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:14.903 18:29:05 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:14.903 18:29:05 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:14.903 18:29:05 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:14.903 18:29:05 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:14.903 18:29:05 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:14.903 18:29:05 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:14.903 18:29:05 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:14.903 18:29:05 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:14.903 18:29:05 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:25:14.903 18:29:05 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:25:14.903 18:29:05 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:25:14.903 18:29:05 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:25:14.903 18:29:05 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:14.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:14.903 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:14.903 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:14.903 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:14.903 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:14.903 18:29:06 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:25:14.903 18:29:06 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77475 00:25:14.903 18:29:06 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77475 00:25:14.903 18:29:06 ftl -- common/autotest_common.sh@835 -- # '[' -z 77475 ']' 00:25:14.903 18:29:06 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.903 18:29:06 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.903 18:29:06 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.903 18:29:06 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.903 18:29:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:14.903 [2024-11-26 18:29:06.678828] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:25:14.903 [2024-11-26 18:29:06.679037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77475 ] 00:25:14.903 [2024-11-26 18:29:06.859178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.903 [2024-11-26 18:29:06.986952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.903 18:29:07 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.903 18:29:07 ftl -- common/autotest_common.sh@868 -- # return 0 00:25:14.903 18:29:07 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:25:14.903 18:29:07 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:25:15.841 18:29:08 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:25:15.841 18:29:08 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:16.409 18:29:09 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:25:16.410 18:29:09 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:25:16.410 18:29:09 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:25:16.668 18:29:09 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:25:16.668 18:29:09 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:25:16.668 18:29:09 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:25:16.668 18:29:09 ftl -- ftl/ftl.sh@50 -- # break 00:25:16.668 18:29:09 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:25:16.668 18:29:09 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:25:16.668 18:29:09 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:25:16.668 18:29:09 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:25:16.928 18:29:10 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:25:16.928 18:29:10 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:25:16.928 18:29:10 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:25:16.928 18:29:10 ftl -- ftl/ftl.sh@63 -- # break 00:25:16.928 18:29:10 ftl -- ftl/ftl.sh@66 -- # killprocess 77475 00:25:16.928 18:29:10 ftl -- common/autotest_common.sh@954 -- # '[' -z 77475 ']' 00:25:16.928 18:29:10 ftl -- common/autotest_common.sh@958 -- # kill -0 77475 00:25:16.928 18:29:10 ftl -- common/autotest_common.sh@959 -- # uname 00:25:16.928 18:29:10 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.928 18:29:10 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77475 00:25:16.928 18:29:10 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:16.928 killing process with pid 77475 00:25:16.928 18:29:10 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:16.928 18:29:10 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77475' 00:25:16.928 18:29:10 ftl -- common/autotest_common.sh@973 -- # kill 77475 00:25:16.928 18:29:10 ftl -- common/autotest_common.sh@978 -- # wait 77475 00:25:20.252 18:29:12 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:25:20.252 18:29:12 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:25:20.252 18:29:12 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:20.252 18:29:12 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.252 18:29:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:20.252 ************************************ 00:25:20.252 START TEST ftl_fio_basic 00:25:20.252 ************************************ 00:25:20.252 18:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:25:20.252 * Looking for test storage... 00:25:20.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.252 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:20.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.252 --rc genhtml_branch_coverage=1 00:25:20.253 --rc genhtml_function_coverage=1 00:25:20.253 --rc genhtml_legend=1 00:25:20.253 --rc geninfo_all_blocks=1 00:25:20.253 --rc geninfo_unexecuted_blocks=1 00:25:20.253 00:25:20.253 ' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:20.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.253 --rc genhtml_branch_coverage=1 00:25:20.253 --rc genhtml_function_coverage=1 00:25:20.253 --rc genhtml_legend=1 00:25:20.253 --rc geninfo_all_blocks=1 00:25:20.253 --rc geninfo_unexecuted_blocks=1 00:25:20.253 00:25:20.253 ' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:20.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.253 --rc genhtml_branch_coverage=1 00:25:20.253 --rc genhtml_function_coverage=1 00:25:20.253 --rc genhtml_legend=1 00:25:20.253 --rc geninfo_all_blocks=1 00:25:20.253 --rc geninfo_unexecuted_blocks=1 00:25:20.253 00:25:20.253 ' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:20.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.253 --rc genhtml_branch_coverage=1 00:25:20.253 --rc genhtml_function_coverage=1 00:25:20.253 --rc genhtml_legend=1 00:25:20.253 --rc geninfo_all_blocks=1 00:25:20.253 --rc geninfo_unexecuted_blocks=1 00:25:20.253 00:25:20.253 ' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
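The lcov gate above relies on the pure-bash version comparator from scripts/common.sh (cmp_versions, splitting fields on '.', '-' and ':'). A condensed, illustrative re-implementation of the same idea, not the exact helper:

    lt() {  # succeed when $1 sorts numerically before $2, field by field
      local IFS=.
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the branch taken above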
00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77624 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77624 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:25:20.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77624 ']' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.253 18:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:20.253 [2024-11-26 18:29:13.338696] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
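The create_base_bdev step that follows reduces to attaching the PCIe controller and deriving the bdev size from block_size * num_blocks; a sketch of the arithmetic behind the jq calls in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    bs=$($RPC bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')   # 4096
    nb=$($RPC bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))                             # 5120 MiB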
00:25:20.253 [2024-11-26 18:29:13.338899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77624 ] 00:25:20.253 [2024-11-26 18:29:13.521189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:20.513 [2024-11-26 18:29:13.657251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.513 [2024-11-26 18:29:13.657403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.513 [2024-11-26 18:29:13.657437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.451 18:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.451 18:29:14 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:25:21.451 18:29:14 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:21.451 18:29:14 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:25:21.451 18:29:14 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:21.451 18:29:14 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:25:21.451 18:29:14 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:25:21.451 18:29:14 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:21.710 18:29:15 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:21.710 18:29:15 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:25:21.710 18:29:15 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:21.710 18:29:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:21.710 18:29:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:21.710 18:29:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:21.710 18:29:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:21.710 18:29:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:21.969 18:29:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:21.969 { 00:25:21.969 "name": "nvme0n1", 00:25:21.969 "aliases": [ 00:25:21.969 "4a2b0ab7-a1e5-4bee-8002-09280bc01a23" 00:25:21.969 ], 00:25:21.969 "product_name": "NVMe disk", 00:25:21.969 "block_size": 4096, 00:25:21.969 "num_blocks": 1310720, 00:25:21.969 "uuid": "4a2b0ab7-a1e5-4bee-8002-09280bc01a23", 00:25:21.969 "numa_id": -1, 00:25:21.969 "assigned_rate_limits": { 00:25:21.969 "rw_ios_per_sec": 0, 00:25:21.969 "rw_mbytes_per_sec": 0, 00:25:21.969 "r_mbytes_per_sec": 0, 00:25:21.969 "w_mbytes_per_sec": 0 00:25:21.969 }, 00:25:21.969 "claimed": false, 00:25:21.969 "zoned": false, 00:25:21.969 "supported_io_types": { 00:25:21.969 "read": true, 00:25:21.969 "write": true, 00:25:21.969 "unmap": true, 00:25:21.969 "flush": true, 00:25:21.969 "reset": true, 00:25:21.969 "nvme_admin": true, 00:25:21.969 "nvme_io": true, 00:25:21.969 "nvme_io_md": false, 00:25:21.969 "write_zeroes": true, 00:25:21.969 "zcopy": false, 00:25:21.969 "get_zone_info": false, 00:25:21.969 "zone_management": false, 00:25:21.969 "zone_append": false, 00:25:21.969 "compare": true, 00:25:21.969 "compare_and_write": false, 00:25:21.969 "abort": true, 00:25:21.969 
"seek_hole": false, 00:25:21.969 "seek_data": false, 00:25:21.969 "copy": true, 00:25:21.969 "nvme_iov_md": false 00:25:21.969 }, 00:25:21.969 "driver_specific": { 00:25:21.969 "nvme": [ 00:25:21.969 { 00:25:21.969 "pci_address": "0000:00:11.0", 00:25:21.969 "trid": { 00:25:21.969 "trtype": "PCIe", 00:25:21.969 "traddr": "0000:00:11.0" 00:25:21.969 }, 00:25:21.969 "ctrlr_data": { 00:25:21.969 "cntlid": 0, 00:25:21.969 "vendor_id": "0x1b36", 00:25:21.969 "model_number": "QEMU NVMe Ctrl", 00:25:21.969 "serial_number": "12341", 00:25:21.969 "firmware_revision": "8.0.0", 00:25:21.969 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:21.969 "oacs": { 00:25:21.969 "security": 0, 00:25:21.969 "format": 1, 00:25:21.969 "firmware": 0, 00:25:21.969 "ns_manage": 1 00:25:21.969 }, 00:25:21.969 "multi_ctrlr": false, 00:25:21.969 "ana_reporting": false 00:25:21.969 }, 00:25:21.969 "vs": { 00:25:21.969 "nvme_version": "1.4" 00:25:21.969 }, 00:25:21.969 "ns_data": { 00:25:21.969 "id": 1, 00:25:21.969 "can_share": false 00:25:21.969 } 00:25:21.969 } 00:25:21.969 ], 00:25:21.969 "mp_policy": "active_passive" 00:25:21.969 } 00:25:21.969 } 00:25:21.969 ]' 00:25:21.969 18:29:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:21.969 18:29:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:21.969 18:29:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:22.229 18:29:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:22.229 18:29:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:22.229 18:29:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:25:22.229 18:29:15 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:25:22.229 18:29:15 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:22.229 18:29:15 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:25:22.229 18:29:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:22.229 18:29:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:22.489 18:29:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:25:22.489 18:29:15 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:22.750 18:29:15 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=fc7baa71-15f9-4cdd-9520-fc5feea02b38 00:25:22.750 18:29:15 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fc7baa71-15f9-4cdd-9520-fc5feea02b38 00:25:23.011 18:29:16 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=22d45149-8404-4a33-af06-8ba5aa6e472d 00:25:23.011 18:29:16 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 22d45149-8404-4a33-af06-8ba5aa6e472d 00:25:23.011 18:29:16 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:25:23.011 18:29:16 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:23.011 18:29:16 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=22d45149-8404-4a33-af06-8ba5aa6e472d 00:25:23.011 18:29:16 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:25:23.011 18:29:16 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 22d45149-8404-4a33-af06-8ba5aa6e472d 00:25:23.011 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=22d45149-8404-4a33-af06-8ba5aa6e472d 
00:25:23.011 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:23.011 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:23.011 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:23.011 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 22d45149-8404-4a33-af06-8ba5aa6e472d 00:25:23.271 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:23.271 { 00:25:23.271 "name": "22d45149-8404-4a33-af06-8ba5aa6e472d", 00:25:23.271 "aliases": [ 00:25:23.271 "lvs/nvme0n1p0" 00:25:23.271 ], 00:25:23.271 "product_name": "Logical Volume", 00:25:23.271 "block_size": 4096, 00:25:23.271 "num_blocks": 26476544, 00:25:23.271 "uuid": "22d45149-8404-4a33-af06-8ba5aa6e472d", 00:25:23.271 "assigned_rate_limits": { 00:25:23.271 "rw_ios_per_sec": 0, 00:25:23.271 "rw_mbytes_per_sec": 0, 00:25:23.271 "r_mbytes_per_sec": 0, 00:25:23.271 "w_mbytes_per_sec": 0 00:25:23.271 }, 00:25:23.271 "claimed": false, 00:25:23.271 "zoned": false, 00:25:23.271 "supported_io_types": { 00:25:23.271 "read": true, 00:25:23.271 "write": true, 00:25:23.271 "unmap": true, 00:25:23.271 "flush": false, 00:25:23.271 "reset": true, 00:25:23.271 "nvme_admin": false, 00:25:23.271 "nvme_io": false, 00:25:23.271 "nvme_io_md": false, 00:25:23.271 "write_zeroes": true, 00:25:23.271 "zcopy": false, 00:25:23.271 "get_zone_info": false, 00:25:23.271 "zone_management": false, 00:25:23.271 "zone_append": false, 00:25:23.271 "compare": false, 00:25:23.271 "compare_and_write": false, 00:25:23.271 "abort": false, 00:25:23.271 "seek_hole": true, 00:25:23.271 "seek_data": true, 00:25:23.271 "copy": false, 00:25:23.271 "nvme_iov_md": false 00:25:23.271 }, 00:25:23.271 "driver_specific": { 00:25:23.271 "lvol": { 00:25:23.271 "lvol_store_uuid": "fc7baa71-15f9-4cdd-9520-fc5feea02b38", 00:25:23.271 "base_bdev": "nvme0n1", 00:25:23.271 "thin_provision": true, 00:25:23.271 "num_allocated_clusters": 0, 00:25:23.271 "snapshot": false, 00:25:23.271 "clone": false, 00:25:23.271 "esnap_clone": false 00:25:23.271 } 00:25:23.271 } 00:25:23.271 } 00:25:23.272 ]' 00:25:23.272 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:23.272 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:23.272 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:23.272 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:23.272 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:23.272 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:23.272 18:29:16 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:25:23.272 18:29:16 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:25:23.272 18:29:16 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:23.531 18:29:16 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:23.531 18:29:16 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:23.531 18:29:16 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 22d45149-8404-4a33-af06-8ba5aa6e472d 00:25:23.531 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=22d45149-8404-4a33-af06-8ba5aa6e472d 00:25:23.531 18:29:16 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:23.531 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:23.531 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:23.531 18:29:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 22d45149-8404-4a33-af06-8ba5aa6e472d 00:25:24.101 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:24.101 { 00:25:24.101 "name": "22d45149-8404-4a33-af06-8ba5aa6e472d", 00:25:24.101 "aliases": [ 00:25:24.101 "lvs/nvme0n1p0" 00:25:24.101 ], 00:25:24.101 "product_name": "Logical Volume", 00:25:24.101 "block_size": 4096, 00:25:24.101 "num_blocks": 26476544, 00:25:24.101 "uuid": "22d45149-8404-4a33-af06-8ba5aa6e472d", 00:25:24.101 "assigned_rate_limits": { 00:25:24.101 "rw_ios_per_sec": 0, 00:25:24.101 "rw_mbytes_per_sec": 0, 00:25:24.101 "r_mbytes_per_sec": 0, 00:25:24.101 "w_mbytes_per_sec": 0 00:25:24.101 }, 00:25:24.101 "claimed": false, 00:25:24.101 "zoned": false, 00:25:24.101 "supported_io_types": { 00:25:24.101 "read": true, 00:25:24.101 "write": true, 00:25:24.101 "unmap": true, 00:25:24.101 "flush": false, 00:25:24.101 "reset": true, 00:25:24.101 "nvme_admin": false, 00:25:24.101 "nvme_io": false, 00:25:24.101 "nvme_io_md": false, 00:25:24.101 "write_zeroes": true, 00:25:24.101 "zcopy": false, 00:25:24.101 "get_zone_info": false, 00:25:24.101 "zone_management": false, 00:25:24.101 "zone_append": false, 00:25:24.101 "compare": false, 00:25:24.101 "compare_and_write": false, 00:25:24.101 "abort": false, 00:25:24.101 "seek_hole": true, 00:25:24.101 "seek_data": true, 00:25:24.101 "copy": false, 00:25:24.101 "nvme_iov_md": false 00:25:24.101 }, 00:25:24.101 "driver_specific": { 00:25:24.101 "lvol": { 00:25:24.101 "lvol_store_uuid": "fc7baa71-15f9-4cdd-9520-fc5feea02b38", 00:25:24.101 "base_bdev": "nvme0n1", 00:25:24.101 "thin_provision": true, 00:25:24.101 "num_allocated_clusters": 0, 00:25:24.101 "snapshot": false, 00:25:24.101 "clone": false, 00:25:24.101 "esnap_clone": false 00:25:24.101 } 00:25:24.101 } 00:25:24.101 } 00:25:24.101 ]' 00:25:24.101 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:24.101 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:24.101 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:24.101 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:24.101 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:24.101 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:24.101 18:29:17 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:25:24.101 18:29:17 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:24.362 18:29:17 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:25:24.362 18:29:17 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:25:24.362 18:29:17 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:25:24.362 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:25:24.362 18:29:17 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 22d45149-8404-4a33-af06-8ba5aa6e472d 00:25:24.362 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=22d45149-8404-4a33-af06-8ba5aa6e472d 00:25:24.362 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:24.362 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:25:24.362 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:25:24.362 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 22d45149-8404-4a33-af06-8ba5aa6e472d 00:25:24.622 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:24.622 { 00:25:24.622 "name": "22d45149-8404-4a33-af06-8ba5aa6e472d", 00:25:24.622 "aliases": [ 00:25:24.622 "lvs/nvme0n1p0" 00:25:24.622 ], 00:25:24.622 "product_name": "Logical Volume", 00:25:24.622 "block_size": 4096, 00:25:24.622 "num_blocks": 26476544, 00:25:24.622 "uuid": "22d45149-8404-4a33-af06-8ba5aa6e472d", 00:25:24.622 "assigned_rate_limits": { 00:25:24.622 "rw_ios_per_sec": 0, 00:25:24.622 "rw_mbytes_per_sec": 0, 00:25:24.622 "r_mbytes_per_sec": 0, 00:25:24.622 "w_mbytes_per_sec": 0 00:25:24.622 }, 00:25:24.622 "claimed": false, 00:25:24.622 "zoned": false, 00:25:24.622 "supported_io_types": { 00:25:24.622 "read": true, 00:25:24.622 "write": true, 00:25:24.622 "unmap": true, 00:25:24.622 "flush": false, 00:25:24.622 "reset": true, 00:25:24.622 "nvme_admin": false, 00:25:24.622 "nvme_io": false, 00:25:24.622 "nvme_io_md": false, 00:25:24.622 "write_zeroes": true, 00:25:24.622 "zcopy": false, 00:25:24.622 "get_zone_info": false, 00:25:24.622 "zone_management": false, 00:25:24.622 "zone_append": false, 00:25:24.622 "compare": false, 00:25:24.622 "compare_and_write": false, 00:25:24.622 "abort": false, 00:25:24.622 "seek_hole": true, 00:25:24.622 "seek_data": true, 00:25:24.622 "copy": false, 00:25:24.622 "nvme_iov_md": false 00:25:24.622 }, 00:25:24.622 "driver_specific": { 00:25:24.622 "lvol": { 00:25:24.622 "lvol_store_uuid": "fc7baa71-15f9-4cdd-9520-fc5feea02b38", 00:25:24.622 "base_bdev": "nvme0n1", 00:25:24.622 "thin_provision": true, 00:25:24.622 "num_allocated_clusters": 0, 00:25:24.622 "snapshot": false, 00:25:24.622 "clone": false, 00:25:24.622 "esnap_clone": false 00:25:24.622 } 00:25:24.622 } 00:25:24.622 } 00:25:24.622 ]' 00:25:24.622 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:24.622 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:25:24.622 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:24.622 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:24.622 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:24.622 18:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:25:24.622 18:29:17 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:25:24.622 18:29:17 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:25:24.622 18:29:17 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 22d45149-8404-4a33-af06-8ba5aa6e472d -c nvc0n1p0 --l2p_dram_limit 60 00:25:24.883 [2024-11-26 18:29:17.977613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.884 [2024-11-26 18:29:17.977680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:24.884 [2024-11-26 18:29:17.977700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:24.884 
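A note on the unary-operator error recorded above (fio.sh line 52): the trace shows the script evaluating '[' -eq 1 ']', i.e. the left-hand operand expanded to an empty string, which is exactly what makes bash's test builtin report "unary operator expected". The log does not reveal which variable was empty, so the name below is a hypothetical stand-in; the failure mode and the usual quoting guard look like this:

  flag=""                             # hypothetical empty variable
  [ $flag -eq 1 ] && echo on          # expands to: [ -eq 1 ]  -> unary operator expected
  [ "${flag:-0}" -eq 1 ] && echo on   # quoted with a default -> no error, test is false

The failed test simply evaluates as false rather than aborting, which is why the run proceeded to fio.sh@56 immediately afterwards.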
[2024-11-26 18:29:17.977709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.884 [2024-11-26 18:29:17.977808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.884 [2024-11-26 18:29:17.977822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:24.884 [2024-11-26 18:29:17.977834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:24.884 [2024-11-26 18:29:17.977843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.884 [2024-11-26 18:29:17.977877] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:24.884 [2024-11-26 18:29:17.979113] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:24.884 [2024-11-26 18:29:17.979154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.884 [2024-11-26 18:29:17.979165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:24.884 [2024-11-26 18:29:17.979190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:25:24.884 [2024-11-26 18:29:17.979200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.884 [2024-11-26 18:29:17.979298] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2a8c365d-4e32-4708-81c8-8eee1b90e5b7 00:25:24.884 [2024-11-26 18:29:17.980925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.884 [2024-11-26 18:29:17.980966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:24.884 [2024-11-26 18:29:17.980977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:24.884 [2024-11-26 18:29:17.980988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.884 [2024-11-26 18:29:17.988813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.884 [2024-11-26 18:29:17.988851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:24.884 [2024-11-26 18:29:17.988864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.766 ms 00:25:24.884 [2024-11-26 18:29:17.988880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.884 [2024-11-26 18:29:17.988995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.884 [2024-11-26 18:29:17.989012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:24.884 [2024-11-26 18:29:17.989023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:24.884 [2024-11-26 18:29:17.989036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.884 [2024-11-26 18:29:17.989139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.884 [2024-11-26 18:29:17.989153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:24.884 [2024-11-26 18:29:17.989162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:24.884 [2024-11-26 18:29:17.989172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.884 [2024-11-26 18:29:17.989211] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:24.884 [2024-11-26 18:29:17.994331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.884 [2024-11-26 
18:29:17.994366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:24.884 [2024-11-26 18:29:17.994385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.138 ms 00:25:24.884 [2024-11-26 18:29:17.994394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.884 [2024-11-26 18:29:17.994447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.884 [2024-11-26 18:29:17.994457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:24.884 [2024-11-26 18:29:17.994469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:24.884 [2024-11-26 18:29:17.994478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.884 [2024-11-26 18:29:17.994537] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:24.884 [2024-11-26 18:29:17.994736] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:24.884 [2024-11-26 18:29:17.994760] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:24.884 [2024-11-26 18:29:17.994773] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:24.884 [2024-11-26 18:29:17.994789] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:24.884 [2024-11-26 18:29:17.994800] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:24.884 [2024-11-26 18:29:17.994812] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:24.884 [2024-11-26 18:29:17.994822] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:24.884 [2024-11-26 18:29:17.994832] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:24.884 [2024-11-26 18:29:17.994841] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:24.884 [2024-11-26 18:29:17.994857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.884 [2024-11-26 18:29:17.994866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:24.884 [2024-11-26 18:29:17.994880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:25:24.884 [2024-11-26 18:29:17.994888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.884 [2024-11-26 18:29:17.994997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.884 [2024-11-26 18:29:17.995013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:24.884 [2024-11-26 18:29:17.995025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:24.884 [2024-11-26 18:29:17.995035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.884 [2024-11-26 18:29:17.995162] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:24.884 [2024-11-26 18:29:17.995177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:24.884 [2024-11-26 18:29:17.995189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:24.884 [2024-11-26 18:29:17.995199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.884 [2024-11-26 18:29:17.995214] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:25:24.884 [2024-11-26 18:29:17.995223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:24.884 [2024-11-26 18:29:17.995234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:24.884 [2024-11-26 18:29:17.995242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:24.884 [2024-11-26 18:29:17.995252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:24.884 [2024-11-26 18:29:17.995261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:24.884 [2024-11-26 18:29:17.995270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:24.884 [2024-11-26 18:29:17.995279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:24.884 [2024-11-26 18:29:17.995289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:24.884 [2024-11-26 18:29:17.995297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:24.884 [2024-11-26 18:29:17.995307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:24.884 [2024-11-26 18:29:17.995314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.884 [2024-11-26 18:29:17.995328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:24.884 [2024-11-26 18:29:17.995337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:24.884 [2024-11-26 18:29:17.995346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.884 [2024-11-26 18:29:17.995355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:24.884 [2024-11-26 18:29:17.995364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:24.884 [2024-11-26 18:29:17.995372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:24.884 [2024-11-26 18:29:17.995381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:24.884 [2024-11-26 18:29:17.995389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:24.884 [2024-11-26 18:29:17.995399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:24.884 [2024-11-26 18:29:17.995408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:24.884 [2024-11-26 18:29:17.995418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:24.884 [2024-11-26 18:29:17.995425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:24.884 [2024-11-26 18:29:17.995435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:24.884 [2024-11-26 18:29:17.995443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:24.884 [2024-11-26 18:29:17.995452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:24.884 [2024-11-26 18:29:17.995460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:24.884 [2024-11-26 18:29:17.995472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:24.884 [2024-11-26 18:29:17.995499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:24.884 [2024-11-26 18:29:17.995511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:24.884 [2024-11-26 18:29:17.995519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:24.884 [2024-11-26 18:29:17.995531] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:24.884 [2024-11-26 18:29:17.995539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:24.884 [2024-11-26 18:29:17.995549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:24.884 [2024-11-26 18:29:17.995556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.884 [2024-11-26 18:29:17.995566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:24.884 [2024-11-26 18:29:17.995574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:24.884 [2024-11-26 18:29:17.995585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.884 [2024-11-26 18:29:17.995607] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:24.884 [2024-11-26 18:29:17.995618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:24.884 [2024-11-26 18:29:17.995637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:24.884 [2024-11-26 18:29:17.995648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.885 [2024-11-26 18:29:17.995657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:24.885 [2024-11-26 18:29:17.995668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:24.885 [2024-11-26 18:29:17.995675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:24.885 [2024-11-26 18:29:17.995686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:24.885 [2024-11-26 18:29:17.995693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:24.885 [2024-11-26 18:29:17.995702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:24.885 [2024-11-26 18:29:17.995715] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:24.885 [2024-11-26 18:29:17.995727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:24.885 [2024-11-26 18:29:17.995737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:24.885 [2024-11-26 18:29:17.995749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:24.885 [2024-11-26 18:29:17.995757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:24.885 [2024-11-26 18:29:17.995768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:24.885 [2024-11-26 18:29:17.995777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:24.885 [2024-11-26 18:29:17.995787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:24.885 [2024-11-26 18:29:17.995794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:24.885 [2024-11-26 18:29:17.995804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:25:24.885 [2024-11-26 18:29:17.995813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:24.885 [2024-11-26 18:29:17.995826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:24.885 [2024-11-26 18:29:17.995835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:24.885 [2024-11-26 18:29:17.995852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:24.885 [2024-11-26 18:29:17.995860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:24.885 [2024-11-26 18:29:17.995872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:24.885 [2024-11-26 18:29:17.995880] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:24.885 [2024-11-26 18:29:17.995893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:24.885 [2024-11-26 18:29:17.995902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:24.885 [2024-11-26 18:29:17.995911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:24.885 [2024-11-26 18:29:17.995919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:24.885 [2024-11-26 18:29:17.995930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:24.885 [2024-11-26 18:29:17.995940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.885 [2024-11-26 18:29:17.995950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:24.885 [2024-11-26 18:29:17.995959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:25:24.885 [2024-11-26 18:29:17.995969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.885 [2024-11-26 18:29:17.996042] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
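The layout dump above cross-checks cleanly against the create parameters. The FTL device exposes 20971520 LBAs of 4096 B each, and the L2P table keeps one 4-byte entry per LBA, which matches the 80.00 MiB l2p region reported; a quick sketch of the arithmetic:

  echo $(( 20971520 * 4 / 1024 / 1024 ))     # 80    -> "Region l2p ... blocks: 80.00 MiB"
  echo $(( 20971520 * 4096 / 1024 / 1024 ))  # 81920 -> 80 GiB of mapped user space

Because bdev_ftl_create was given --l2p_dram_limit 60, only part of that 80 MiB table can be resident at once, which is what the "l2p maximum resident size is: 59 (of 60) MiB" line further down reports.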
00:25:24.885 [2024-11-26 18:29:17.996059] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:29.089 [2024-11-26 18:29:22.002732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.089 [2024-11-26 18:29:22.002805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:29.089 [2024-11-26 18:29:22.002822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4014.419 ms 00:25:29.089 [2024-11-26 18:29:22.002834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.089 [2024-11-26 18:29:22.044691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.089 [2024-11-26 18:29:22.044752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:29.089 [2024-11-26 18:29:22.044767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.511 ms 00:25:29.089 [2024-11-26 18:29:22.044779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.089 [2024-11-26 18:29:22.044948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.089 [2024-11-26 18:29:22.044965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:29.089 [2024-11-26 18:29:22.044976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:29.089 [2024-11-26 18:29:22.044989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.089 [2024-11-26 18:29:22.107246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.089 [2024-11-26 18:29:22.107320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:29.089 [2024-11-26 18:29:22.107335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.327 ms 00:25:29.089 [2024-11-26 18:29:22.107347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.089 [2024-11-26 18:29:22.107399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.089 [2024-11-26 18:29:22.107410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:29.089 [2024-11-26 18:29:22.107420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:29.089 [2024-11-26 18:29:22.107429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.089 [2024-11-26 18:29:22.107973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.089 [2024-11-26 18:29:22.107997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:29.089 [2024-11-26 18:29:22.108008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:25:29.089 [2024-11-26 18:29:22.108018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.089 [2024-11-26 18:29:22.108141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.089 [2024-11-26 18:29:22.108158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:29.089 [2024-11-26 18:29:22.108167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:25:29.089 [2024-11-26 18:29:22.108178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.089 [2024-11-26 18:29:22.129855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.089 [2024-11-26 18:29:22.129907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:29.089 [2024-11-26 
18:29:22.129922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.688 ms 00:25:29.089 [2024-11-26 18:29:22.129934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.089 [2024-11-26 18:29:22.144722] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:29.089 [2024-11-26 18:29:22.162364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.089 [2024-11-26 18:29:22.162525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:29.089 [2024-11-26 18:29:22.162553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.360 ms 00:25:29.089 [2024-11-26 18:29:22.162563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.089 [2024-11-26 18:29:22.249336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.089 [2024-11-26 18:29:22.249401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:29.089 [2024-11-26 18:29:22.249424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.856 ms 00:25:29.089 [2024-11-26 18:29:22.249434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.089 [2024-11-26 18:29:22.249686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.089 [2024-11-26 18:29:22.249701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:29.089 [2024-11-26 18:29:22.249716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:25:29.089 [2024-11-26 18:29:22.249726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.089 [2024-11-26 18:29:22.295435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.089 [2024-11-26 18:29:22.295596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:29.089 [2024-11-26 18:29:22.295634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.713 ms 00:25:29.089 [2024-11-26 18:29:22.295646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.089 [2024-11-26 18:29:22.340890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.089 [2024-11-26 18:29:22.341035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:29.089 [2024-11-26 18:29:22.341060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.245 ms 00:25:29.089 [2024-11-26 18:29:22.341069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.089 [2024-11-26 18:29:22.341965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.089 [2024-11-26 18:29:22.341994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:29.089 [2024-11-26 18:29:22.342008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:25:29.089 [2024-11-26 18:29:22.342016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.350 [2024-11-26 18:29:22.455559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.350 [2024-11-26 18:29:22.455632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:29.350 [2024-11-26 18:29:22.455658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.654 ms 00:25:29.350 [2024-11-26 18:29:22.455668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.350 [2024-11-26 
18:29:22.499869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.350 [2024-11-26 18:29:22.499928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:29.350 [2024-11-26 18:29:22.499947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.179 ms 00:25:29.350 [2024-11-26 18:29:22.499956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.350 [2024-11-26 18:29:22.545335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.350 [2024-11-26 18:29:22.545395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:29.350 [2024-11-26 18:29:22.545413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.412 ms 00:25:29.350 [2024-11-26 18:29:22.545422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.350 [2024-11-26 18:29:22.587465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.350 [2024-11-26 18:29:22.587514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:29.350 [2024-11-26 18:29:22.587532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.076 ms 00:25:29.350 [2024-11-26 18:29:22.587541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.350 [2024-11-26 18:29:22.587588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.350 [2024-11-26 18:29:22.587599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:29.350 [2024-11-26 18:29:22.587630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:29.350 [2024-11-26 18:29:22.587641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.350 [2024-11-26 18:29:22.587787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.350 [2024-11-26 18:29:22.587800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:29.350 [2024-11-26 18:29:22.587813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:29.350 [2024-11-26 18:29:22.587821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.350 [2024-11-26 18:29:22.589021] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4619.775 ms, result 0 00:25:29.350 { 00:25:29.350 "name": "ftl0", 00:25:29.350 "uuid": "2a8c365d-4e32-4708-81c8-8eee1b90e5b7" 00:25:29.350 } 00:25:29.350 18:29:22 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:25:29.350 18:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:25:29.350 18:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:29.350 18:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:25:29.350 18:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:29.350 18:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:29.350 18:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:29.610 18:29:22 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:29.869 [ 00:25:29.869 { 00:25:29.869 "name": "ftl0", 00:25:29.869 "aliases": [ 00:25:29.869 "2a8c365d-4e32-4708-81c8-8eee1b90e5b7" 00:25:29.869 ], 00:25:29.870 "product_name": "FTL 
disk", 00:25:29.870 "block_size": 4096, 00:25:29.870 "num_blocks": 20971520, 00:25:29.870 "uuid": "2a8c365d-4e32-4708-81c8-8eee1b90e5b7", 00:25:29.870 "assigned_rate_limits": { 00:25:29.870 "rw_ios_per_sec": 0, 00:25:29.870 "rw_mbytes_per_sec": 0, 00:25:29.870 "r_mbytes_per_sec": 0, 00:25:29.870 "w_mbytes_per_sec": 0 00:25:29.870 }, 00:25:29.870 "claimed": false, 00:25:29.870 "zoned": false, 00:25:29.870 "supported_io_types": { 00:25:29.870 "read": true, 00:25:29.870 "write": true, 00:25:29.870 "unmap": true, 00:25:29.870 "flush": true, 00:25:29.870 "reset": false, 00:25:29.870 "nvme_admin": false, 00:25:29.870 "nvme_io": false, 00:25:29.870 "nvme_io_md": false, 00:25:29.870 "write_zeroes": true, 00:25:29.870 "zcopy": false, 00:25:29.870 "get_zone_info": false, 00:25:29.870 "zone_management": false, 00:25:29.870 "zone_append": false, 00:25:29.870 "compare": false, 00:25:29.870 "compare_and_write": false, 00:25:29.870 "abort": false, 00:25:29.870 "seek_hole": false, 00:25:29.870 "seek_data": false, 00:25:29.870 "copy": false, 00:25:29.870 "nvme_iov_md": false 00:25:29.870 }, 00:25:29.870 "driver_specific": { 00:25:29.870 "ftl": { 00:25:29.870 "base_bdev": "22d45149-8404-4a33-af06-8ba5aa6e472d", 00:25:29.870 "cache": "nvc0n1p0" 00:25:29.870 } 00:25:29.870 } 00:25:29.870 } 00:25:29.870 ] 00:25:29.870 18:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:25:29.870 18:29:23 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:25:29.870 18:29:23 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:30.130 18:29:23 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:25:30.130 18:29:23 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:30.390 [2024-11-26 18:29:23.528752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.390 [2024-11-26 18:29:23.528904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:30.390 [2024-11-26 18:29:23.528948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:30.390 [2024-11-26 18:29:23.528981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.390 [2024-11-26 18:29:23.529108] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:30.390 [2024-11-26 18:29:23.533536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.390 [2024-11-26 18:29:23.533624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:30.390 [2024-11-26 18:29:23.533664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.368 ms 00:25:30.390 [2024-11-26 18:29:23.533688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.390 [2024-11-26 18:29:23.534636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.390 [2024-11-26 18:29:23.534702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:30.390 [2024-11-26 18:29:23.534742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:25:30.390 [2024-11-26 18:29:23.534765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.390 [2024-11-26 18:29:23.537427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.390 [2024-11-26 18:29:23.537478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:30.390 
[2024-11-26 18:29:23.537492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.569 ms 00:25:30.390 [2024-11-26 18:29:23.537500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.390 [2024-11-26 18:29:23.542747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.390 [2024-11-26 18:29:23.542805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:30.390 [2024-11-26 18:29:23.542834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.207 ms 00:25:30.390 [2024-11-26 18:29:23.542854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.390 [2024-11-26 18:29:23.582145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.390 [2024-11-26 18:29:23.582237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:30.390 [2024-11-26 18:29:23.582296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.226 ms 00:25:30.390 [2024-11-26 18:29:23.582319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.390 [2024-11-26 18:29:23.607653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.390 [2024-11-26 18:29:23.607763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:30.390 [2024-11-26 18:29:23.607808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.292 ms 00:25:30.390 [2024-11-26 18:29:23.607833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.390 [2024-11-26 18:29:23.608178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.390 [2024-11-26 18:29:23.608228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:30.390 [2024-11-26 18:29:23.608267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:25:30.390 [2024-11-26 18:29:23.608305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.390 [2024-11-26 18:29:23.647752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.390 [2024-11-26 18:29:23.647845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:30.390 [2024-11-26 18:29:23.647881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.448 ms 00:25:30.390 [2024-11-26 18:29:23.647902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.390 [2024-11-26 18:29:23.685769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.390 [2024-11-26 18:29:23.685877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:30.390 [2024-11-26 18:29:23.685915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.857 ms 00:25:30.390 [2024-11-26 18:29:23.685939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.653 [2024-11-26 18:29:23.728715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.653 [2024-11-26 18:29:23.728853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:30.653 [2024-11-26 18:29:23.728900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.756 ms 00:25:30.653 [2024-11-26 18:29:23.728926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.653 [2024-11-26 18:29:23.772291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.653 [2024-11-26 18:29:23.772459] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:30.653 [2024-11-26 18:29:23.772505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.202 ms 00:25:30.653 [2024-11-26 18:29:23.772532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.653 [2024-11-26 18:29:23.772666] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:30.653 [2024-11-26 18:29:23.772753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:30.653 [2024-11-26 18:29:23.772805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:30.653 [2024-11-26 18:29:23.772870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:30.653 [2024-11-26 18:29:23.772918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:30.653 [2024-11-26 18:29:23.772970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:30.653 [2024-11-26 18:29:23.773032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:30.653 [2024-11-26 18:29:23.773098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.773928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 
[2024-11-26 18:29:23.773982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:25:30.654 [2024-11-26 18:29:23.774776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.774991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.775000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.775012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.775021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.775034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.775043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.775056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:30.654 [2024-11-26 18:29:23.775065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:30.655 [2024-11-26 18:29:23.775375] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:30.655 [2024-11-26 18:29:23.775386] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a8c365d-4e32-4708-81c8-8eee1b90e5b7 00:25:30.655 [2024-11-26 18:29:23.775396] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:30.655 [2024-11-26 18:29:23.775409] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:30.655 [2024-11-26 18:29:23.775421] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:30.655 [2024-11-26 18:29:23.775433] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:30.655 [2024-11-26 18:29:23.775441] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:30.655 [2024-11-26 18:29:23.775453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:30.655 [2024-11-26 18:29:23.775462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:30.655 [2024-11-26 18:29:23.775478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:30.655 [2024-11-26 18:29:23.775486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:30.655 [2024-11-26 18:29:23.775498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.655 [2024-11-26 18:29:23.775508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:30.655 [2024-11-26 18:29:23.775521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.858 ms 00:25:30.655 [2024-11-26 18:29:23.775530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.655 [2024-11-26 18:29:23.799358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.655 [2024-11-26 18:29:23.799461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:30.655 [2024-11-26 18:29:23.799501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.682 ms 00:25:30.655 [2024-11-26 18:29:23.799527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.655 [2024-11-26 18:29:23.800241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.655 [2024-11-26 18:29:23.800318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:30.655 [2024-11-26 18:29:23.800381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:25:30.655 [2024-11-26 18:29:23.800426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.655 [2024-11-26 18:29:23.870450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.655 [2024-11-26 18:29:23.870574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:30.655 [2024-11-26 18:29:23.870609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.655 [2024-11-26 18:29:23.870641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:30.655 [2024-11-26 18:29:23.870757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.655 [2024-11-26 18:29:23.870800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:30.655 [2024-11-26 18:29:23.870837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.655 [2024-11-26 18:29:23.870858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.655 [2024-11-26 18:29:23.871046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.655 [2024-11-26 18:29:23.871105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:30.655 [2024-11-26 18:29:23.871142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.655 [2024-11-26 18:29:23.871176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.655 [2024-11-26 18:29:23.871253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.655 [2024-11-26 18:29:23.871284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:30.655 [2024-11-26 18:29:23.871317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.655 [2024-11-26 18:29:23.871350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.915 [2024-11-26 18:29:24.005486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.915 [2024-11-26 18:29:24.005630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:30.915 [2024-11-26 18:29:24.005672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.915 [2024-11-26 18:29:24.005694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.915 [2024-11-26 18:29:24.110293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.915 [2024-11-26 18:29:24.110430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:30.915 [2024-11-26 18:29:24.110466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.915 [2024-11-26 18:29:24.110489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.915 [2024-11-26 18:29:24.110658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.915 [2024-11-26 18:29:24.110760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:30.915 [2024-11-26 18:29:24.110797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.915 [2024-11-26 18:29:24.110828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.915 [2024-11-26 18:29:24.110950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.915 [2024-11-26 18:29:24.110962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:30.915 [2024-11-26 18:29:24.110973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.915 [2024-11-26 18:29:24.110981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.915 [2024-11-26 18:29:24.111132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.916 [2024-11-26 18:29:24.111147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:30.916 [2024-11-26 18:29:24.111160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.916 [2024-11-26 
18:29:24.111168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.916 [2024-11-26 18:29:24.111255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.916 [2024-11-26 18:29:24.111267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:30.916 [2024-11-26 18:29:24.111277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.916 [2024-11-26 18:29:24.111285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.916 [2024-11-26 18:29:24.111356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.916 [2024-11-26 18:29:24.111367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:30.916 [2024-11-26 18:29:24.111377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.916 [2024-11-26 18:29:24.111387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.916 [2024-11-26 18:29:24.111463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.916 [2024-11-26 18:29:24.111474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:30.916 [2024-11-26 18:29:24.111485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.916 [2024-11-26 18:29:24.111492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.916 [2024-11-26 18:29:24.111768] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 584.110 ms, result 0 00:25:30.916 true 00:25:30.916 18:29:24 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77624 00:25:30.916 18:29:24 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77624 ']' 00:25:30.916 18:29:24 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77624 00:25:30.916 18:29:24 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:25:30.916 18:29:24 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:30.916 18:29:24 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77624 00:25:30.916 18:29:24 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:30.916 18:29:24 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:30.916 18:29:24 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77624' 00:25:30.916 killing process with pid 77624 00:25:30.916 18:29:24 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77624 00:25:30.916 18:29:24 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77624 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:39.072 18:29:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:25:39.331 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:25:39.331 fio-3.35 00:25:39.331 Starting 1 thread 00:25:44.603 00:25:44.603 test: (groupid=0, jobs=1): err= 0: pid=77909: Tue Nov 26 18:29:37 2024 00:25:44.603 read: IOPS=1083, BW=71.9MiB/s (75.4MB/s)(255MiB/3539msec) 00:25:44.603 slat (nsec): min=4333, max=36587, avg=6478.33, stdev=2719.93 00:25:44.603 clat (usec): min=245, max=4061, avg=406.88, stdev=89.40 00:25:44.603 lat (usec): min=251, max=4075, avg=413.36, stdev=89.78 00:25:44.603 clat percentiles (usec): 00:25:44.603 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 338], 00:25:44.603 | 30.00th=[ 371], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 429], 00:25:44.603 | 70.00th=[ 445], 80.00th=[ 457], 90.00th=[ 498], 95.00th=[ 515], 00:25:44.603 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 1020], 99.95th=[ 1287], 00:25:44.603 | 99.99th=[ 4047] 00:25:44.603 write: IOPS=1090, BW=72.4MiB/s (76.0MB/s)(256MiB/3535msec); 0 zone resets 00:25:44.603 slat (usec): min=15, max=115, avg=21.24, stdev= 5.83 00:25:44.603 clat (usec): min=293, max=6867, avg=473.20, stdev=147.48 00:25:44.603 lat (usec): min=321, max=6886, avg=494.44, stdev=147.73 00:25:44.603 clat percentiles (usec): 00:25:44.603 | 1.00th=[ 330], 5.00th=[ 375], 10.00th=[ 392], 20.00th=[ 408], 00:25:44.603 | 30.00th=[ 424], 40.00th=[ 453], 50.00th=[ 465], 60.00th=[ 478], 00:25:44.603 | 70.00th=[ 498], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 586], 00:25:44.603 | 99.00th=[ 742], 99.50th=[ 791], 99.90th=[ 1614], 99.95th=[ 4752], 00:25:44.603 | 99.99th=[ 6849] 00:25:44.603 bw ( KiB/s): min=67864, max=81464, per=100.00%, avg=74236.57, stdev=4710.15, samples=7 00:25:44.603 iops : min= 998, max= 1198, avg=1091.71, stdev=69.27, samples=7 00:25:44.603 lat (usec) : 250=0.03%, 500=80.53%, 750=18.96%, 1000=0.35% 
00:25:44.603 lat (msec) : 2=0.09%, 10=0.04% 00:25:44.603 cpu : usr=99.27%, sys=0.11%, ctx=9, majf=0, minf=1169 00:25:44.603 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:44.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.603 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.603 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:44.603 00:25:44.603 Run status group 0 (all jobs): 00:25:44.603 READ: bw=71.9MiB/s (75.4MB/s), 71.9MiB/s-71.9MiB/s (75.4MB/s-75.4MB/s), io=255MiB (267MB), run=3539-3539msec 00:25:44.603 WRITE: bw=72.4MiB/s (76.0MB/s), 72.4MiB/s-72.4MiB/s (76.0MB/s-76.0MB/s), io=256MiB (269MB), run=3535-3535msec 00:25:45.982 ----------------------------------------------------- 00:25:45.982 Suppressions used: 00:25:45.982 count bytes template 00:25:45.982 1 5 /usr/src/fio/parse.c 00:25:45.982 1 8 libtcmalloc_minimal.so 00:25:45.982 1 904 libcrypto.so 00:25:45.982 ----------------------------------------------------- 00:25:45.982 00:25:46.242 18:29:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:25:46.242 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:46.242 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:46.242 18:29:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:25:46.242 18:29:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:25:46.242 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:46.242 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:46.242 18:29:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:25:46.242 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:25:46.242 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:46.242 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:46.242 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:46.242 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.242 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:25:46.243 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:46.243 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.243 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.243 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:25:46.243 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:46.243 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:46.243 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:46.243 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:25:46.243 18:29:39 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:46.243 18:29:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:25:46.502 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:25:46.502 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:25:46.502 fio-3.35 00:25:46.502 Starting 2 threads 00:26:18.595 00:26:18.595 first_half: (groupid=0, jobs=1): err= 0: pid=78013: Tue Nov 26 18:30:06 2024 00:26:18.595 read: IOPS=2570, BW=10.0MiB/s (10.5MB/s)(255MiB/25384msec) 00:26:18.595 slat (nsec): min=3869, max=36995, avg=6640.57, stdev=1715.90 00:26:18.595 clat (usec): min=863, max=304527, avg=37273.63, stdev=20941.85 00:26:18.595 lat (usec): min=872, max=304532, avg=37280.27, stdev=20942.02 00:26:18.595 clat percentiles (msec): 00:26:18.595 | 1.00th=[ 8], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:18.595 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:18.595 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 52], 00:26:18.595 | 99.00th=[ 157], 99.50th=[ 182], 99.90th=[ 222], 99.95th=[ 264], 00:26:18.595 | 99.99th=[ 292] 00:26:18.595 write: IOPS=3289, BW=12.9MiB/s (13.5MB/s)(256MiB/19921msec); 0 zone resets 00:26:18.595 slat (usec): min=4, max=626, avg= 8.97, stdev= 7.33 00:26:18.595 clat (usec): min=411, max=94976, avg=12420.86, stdev=21069.37 00:26:18.595 lat (usec): min=425, max=94984, avg=12429.83, stdev=21069.49 00:26:18.595 clat percentiles (usec): 00:26:18.595 | 1.00th=[ 1090], 5.00th=[ 1450], 10.00th=[ 1680], 20.00th=[ 2008], 00:26:18.595 | 30.00th=[ 3064], 40.00th=[ 4817], 50.00th=[ 6194], 60.00th=[ 7111], 00:26:18.595 | 70.00th=[ 8291], 80.00th=[12256], 90.00th=[16057], 95.00th=[80217], 00:26:18.595 | 99.00th=[87557], 99.50th=[89654], 99.90th=[92799], 99.95th=[92799], 00:26:18.595 | 99.99th=[93848] 00:26:18.595 bw ( KiB/s): min= 6360, max=41336, per=99.85%, avg=23831.27, stdev=11312.83, samples=22 00:26:18.595 iops : min= 1590, max=10334, avg=5957.82, stdev=2828.21, samples=22 00:26:18.595 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.22% 00:26:18.595 lat (msec) : 2=9.75%, 4=8.12%, 10=20.15%, 20=8.18%, 50=46.84% 00:26:18.595 lat (msec) : 100=5.38%, 250=1.27%, 500=0.04% 00:26:18.595 cpu : usr=99.28%, sys=0.11%, ctx=74, majf=0, minf=5569 00:26:18.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:18.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.595 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:18.595 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:18.595 second_half: (groupid=0, jobs=1): err= 0: pid=78014: Tue Nov 26 18:30:06 2024 00:26:18.595 read: IOPS=2553, BW=9.97MiB/s (10.5MB/s)(255MiB/25558msec) 00:26:18.595 slat (usec): min=3, max=106, avg= 6.52, stdev= 1.74 00:26:18.595 clat (usec): min=1132, max=312567, avg=36707.03, stdev=21153.71 00:26:18.595 lat (usec): min=1141, max=312573, avg=36713.55, stdev=21153.98 00:26:18.595 clat percentiles (msec): 00:26:18.595 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:26:18.595 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:26:18.595 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 39], 
95.00th=[ 51], 00:26:18.595 | 99.00th=[ 163], 99.50th=[ 180], 99.90th=[ 228], 99.95th=[ 284], 00:26:18.595 | 99.99th=[ 309] 00:26:18.595 write: IOPS=2983, BW=11.7MiB/s (12.2MB/s)(256MiB/21967msec); 0 zone resets 00:26:18.595 slat (usec): min=4, max=540, avg= 8.86, stdev= 5.59 00:26:18.595 clat (usec): min=376, max=95115, avg=13344.33, stdev=21808.42 00:26:18.595 lat (usec): min=388, max=95121, avg=13353.18, stdev=21808.54 00:26:18.595 clat percentiles (usec): 00:26:18.595 | 1.00th=[ 1004], 5.00th=[ 1319], 10.00th=[ 1565], 20.00th=[ 1860], 00:26:18.595 | 30.00th=[ 2311], 40.00th=[ 4752], 50.00th=[ 6128], 60.00th=[ 7308], 00:26:18.595 | 70.00th=[ 9241], 80.00th=[13304], 90.00th=[35914], 95.00th=[81265], 00:26:18.595 | 99.00th=[89654], 99.50th=[90702], 99.90th=[93848], 99.95th=[93848], 00:26:18.596 | 99.99th=[94897] 00:26:18.596 bw ( KiB/s): min= 928, max=37752, per=84.50%, avg=20167.58, stdev=9002.38, samples=26 00:26:18.596 iops : min= 232, max= 9438, avg=5041.88, stdev=2250.58, samples=26 00:26:18.596 lat (usec) : 500=0.01%, 750=0.09%, 1000=0.39% 00:26:18.596 lat (msec) : 2=12.00%, 4=6.14%, 10=18.51%, 20=8.65%, 50=47.51% 00:26:18.596 lat (msec) : 100=5.44%, 250=1.23%, 500=0.03% 00:26:18.596 cpu : usr=99.23%, sys=0.12%, ctx=53, majf=0, minf=5548 00:26:18.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:18.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:18.596 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:18.596 issued rwts: total=65251,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:18.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:18.596 00:26:18.596 Run status group 0 (all jobs): 00:26:18.596 READ: bw=19.9MiB/s (20.9MB/s), 9.97MiB/s-10.0MiB/s (10.5MB/s-10.5MB/s), io=510MiB (534MB), run=25384-25558msec 00:26:18.596 WRITE: bw=23.3MiB/s (24.4MB/s), 11.7MiB/s-12.9MiB/s (12.2MB/s-13.5MB/s), io=512MiB (537MB), run=19921-21967msec 00:26:18.596 ----------------------------------------------------- 00:26:18.596 Suppressions used: 00:26:18.596 count bytes template 00:26:18.596 2 10 /usr/src/fio/parse.c 00:26:18.596 2 192 /usr/src/fio/iolog.c 00:26:18.596 1 8 libtcmalloc_minimal.so 00:26:18.596 1 904 libcrypto.so 00:26:18.596 ----------------------------------------------------- 00:26:18.596 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
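This is the third time the fio_bdev helper's sanitizer preamble is traced in this section. Condensed into a standalone sketch (the paths are taken verbatim from this log; the wrapper name fio_with_asan is illustrative, not a helper that exists in the tree, and the loop over both sanitizer names is collapsed to the libasan case that actually fires here), the pattern is: locate the ASan runtime the fio plugin was linked against, then preload it ahead of the plugin so ASan's interposition is in place before fio loads the ioengine:

    # Sketch of the fio_bdev pattern traced above; fio_with_asan is an
    # illustrative name, the paths are the ones shown in this log.
    fio_with_asan() {
        local job=$1
        local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
        # Pick out the ASan runtime the plugin links against (empty if none).
        local asan_lib
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
        # ASan must come first in LD_PRELOAD, ahead of the ioengine plugin.
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job"
    }

    fio_with_asan /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio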
00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:18.596 18:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:18.596 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:18.596 fio-3.35 00:26:18.596 Starting 1 thread 00:26:33.474 00:26:33.474 test: (groupid=0, jobs=1): err= 0: pid=78354: Tue Nov 26 18:30:25 2024 00:26:33.474 read: IOPS=7536, BW=29.4MiB/s (30.9MB/s)(255MiB/8652msec) 00:26:33.474 slat (usec): min=4, max=126, avg= 6.57, stdev= 2.34 00:26:33.474 clat (usec): min=764, max=33119, avg=16973.04, stdev=1153.80 00:26:33.474 lat (usec): min=770, max=33138, avg=16979.61, stdev=1154.51 00:26:33.474 clat percentiles (usec): 00:26:33.474 | 1.00th=[16057], 5.00th=[16188], 10.00th=[16319], 20.00th=[16450], 00:26:33.474 | 30.00th=[16581], 40.00th=[16712], 50.00th=[16909], 60.00th=[16909], 00:26:33.474 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17433], 95.00th=[19268], 00:26:33.474 | 99.00th=[20841], 99.50th=[21365], 99.90th=[30278], 99.95th=[31327], 00:26:33.474 | 99.99th=[32637] 00:26:33.474 write: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(256MiB/5536msec); 0 zone resets 00:26:33.474 slat (usec): min=5, max=428, avg=10.27, stdev= 5.88 00:26:33.474 clat (usec): min=651, max=59017, avg=10756.64, stdev=12632.85 00:26:33.474 lat (usec): min=658, max=59026, avg=10766.92, stdev=12632.81 00:26:33.474 clat percentiles (usec): 00:26:33.474 | 1.00th=[ 947], 5.00th=[ 1123], 10.00th=[ 1287], 20.00th=[ 1532], 00:26:33.474 | 30.00th=[ 1762], 40.00th=[ 2278], 50.00th=[ 7570], 60.00th=[ 9110], 00:26:33.474 | 70.00th=[10552], 80.00th=[12649], 90.00th=[35914], 95.00th=[38536], 00:26:33.474 | 99.00th=[50070], 99.50th=[53740], 99.90th=[56886], 99.95th=[57410], 00:26:33.474 | 99.99th=[58459] 00:26:33.474 bw ( KiB/s): min= 1984, max=62016, per=92.27%, avg=43690.67, stdev=14253.52, samples=12 00:26:33.474 iops : min= 496, max=15504, avg=10922.67, stdev=3563.38, samples=12 00:26:33.474 lat (usec) : 750=0.02%, 1000=0.93% 00:26:33.474 lat (msec) : 2=17.69%, 4=2.34%, 10=12.51%, 20=57.05%, 50=8.95% 00:26:33.474 lat (msec) : 100=0.51% 00:26:33.474 cpu : usr=98.78%, sys=0.30%, ctx=179, 
majf=0, minf=5565 00:26:33.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:33.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.474 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:33.474 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:33.474 00:26:33.474 Run status group 0 (all jobs): 00:26:33.474 READ: bw=29.4MiB/s (30.9MB/s), 29.4MiB/s-29.4MiB/s (30.9MB/s-30.9MB/s), io=255MiB (267MB), run=8652-8652msec 00:26:33.474 WRITE: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=256MiB (268MB), run=5536-5536msec 00:26:34.850 ----------------------------------------------------- 00:26:34.850 Suppressions used: 00:26:34.850 count bytes template 00:26:34.850 1 5 /usr/src/fio/parse.c 00:26:34.850 2 192 /usr/src/fio/iolog.c 00:26:34.850 1 8 libtcmalloc_minimal.so 00:26:34.850 1 904 libcrypto.so 00:26:34.850 ----------------------------------------------------- 00:26:34.850 00:26:34.850 18:30:27 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:26:34.850 18:30:27 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:34.850 18:30:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:34.850 18:30:28 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:34.850 18:30:28 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:26:34.850 Remove shared memory files 00:26:34.850 18:30:28 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:34.850 18:30:28 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:26:34.850 18:30:28 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:26:34.850 18:30:28 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58059 /dev/shm/spdk_tgt_trace.pid76506 00:26:34.850 18:30:28 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:34.850 18:30:28 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:26:34.850 ************************************ 00:26:34.850 END TEST ftl_fio_basic 00:26:34.850 ************************************ 00:26:34.851 00:26:34.851 real 1m15.092s 00:26:34.851 user 2m46.178s 00:26:34.851 sys 0m3.860s 00:26:34.851 18:30:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:34.851 18:30:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:34.851 18:30:28 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:26:34.851 18:30:28 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:34.851 18:30:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:34.851 18:30:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:34.851 ************************************ 00:26:34.851 START TEST ftl_bdevperf 00:26:34.851 ************************************ 00:26:34.851 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:26:35.110 * Looking for test storage... 
00:26:35.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:35.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.110 --rc genhtml_branch_coverage=1 00:26:35.110 --rc genhtml_function_coverage=1 00:26:35.110 --rc genhtml_legend=1 00:26:35.110 --rc geninfo_all_blocks=1 00:26:35.110 --rc geninfo_unexecuted_blocks=1 00:26:35.110 00:26:35.110 ' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:35.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.110 --rc genhtml_branch_coverage=1 00:26:35.110 
--rc genhtml_function_coverage=1 00:26:35.110 --rc genhtml_legend=1 00:26:35.110 --rc geninfo_all_blocks=1 00:26:35.110 --rc geninfo_unexecuted_blocks=1 00:26:35.110 00:26:35.110 ' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:35.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.110 --rc genhtml_branch_coverage=1 00:26:35.110 --rc genhtml_function_coverage=1 00:26:35.110 --rc genhtml_legend=1 00:26:35.110 --rc geninfo_all_blocks=1 00:26:35.110 --rc geninfo_unexecuted_blocks=1 00:26:35.110 00:26:35.110 ' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:35.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.110 --rc genhtml_branch_coverage=1 00:26:35.110 --rc genhtml_function_coverage=1 00:26:35.110 --rc genhtml_legend=1 00:26:35.110 --rc geninfo_all_blocks=1 00:26:35.110 --rc geninfo_unexecuted_blocks=1 00:26:35.110 00:26:35.110 ' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78598 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78598 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78598 ']' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.110 18:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.368 [2024-11-26 18:30:28.470256] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
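A hedged note on the launch traced just above, since the flag semantics here come from memory of SPDK's bdevperf usage rather than from this log: -z starts bdevperf idle, waiting for a perform_tests RPC instead of running a job immediately, and -T ftl0 points the run at the ftl0 bdev the script has yet to create, which is what lets the harness finish assembling the lvol/NV-cache/FTL stack below before any I/O is generated. If that reading is right, the eventual trigger would look roughly like:

    # Assumed trigger step; bdevperf.py ships in the SPDK tree alongside bdevperf.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests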
00:26:35.368 [2024-11-26 18:30:28.470433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78598 ] 00:26:35.369 [2024-11-26 18:30:28.640569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.626 [2024-11-26 18:30:28.785961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.191 18:30:29 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.191 18:30:29 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:36.191 18:30:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:36.191 18:30:29 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:26:36.191 18:30:29 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:36.191 18:30:29 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:26:36.191 18:30:29 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:26:36.191 18:30:29 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:36.449 18:30:29 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:36.449 18:30:29 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:26:36.449 18:30:29 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:36.449 18:30:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:36.449 18:30:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:36.449 18:30:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:26:36.449 18:30:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:36.449 18:30:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:36.707 18:30:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:36.707 { 00:26:36.707 "name": "nvme0n1", 00:26:36.707 "aliases": [ 00:26:36.707 "4c163bd6-d0ac-4037-b859-275df9ed17a6" 00:26:36.707 ], 00:26:36.707 "product_name": "NVMe disk", 00:26:36.707 "block_size": 4096, 00:26:36.707 "num_blocks": 1310720, 00:26:36.707 "uuid": "4c163bd6-d0ac-4037-b859-275df9ed17a6", 00:26:36.707 "numa_id": -1, 00:26:36.707 "assigned_rate_limits": { 00:26:36.707 "rw_ios_per_sec": 0, 00:26:36.707 "rw_mbytes_per_sec": 0, 00:26:36.707 "r_mbytes_per_sec": 0, 00:26:36.707 "w_mbytes_per_sec": 0 00:26:36.707 }, 00:26:36.707 "claimed": true, 00:26:36.707 "claim_type": "read_many_write_one", 00:26:36.707 "zoned": false, 00:26:36.707 "supported_io_types": { 00:26:36.707 "read": true, 00:26:36.707 "write": true, 00:26:36.707 "unmap": true, 00:26:36.707 "flush": true, 00:26:36.707 "reset": true, 00:26:36.707 "nvme_admin": true, 00:26:36.707 "nvme_io": true, 00:26:36.707 "nvme_io_md": false, 00:26:36.707 "write_zeroes": true, 00:26:36.707 "zcopy": false, 00:26:36.707 "get_zone_info": false, 00:26:36.707 "zone_management": false, 00:26:36.707 "zone_append": false, 00:26:36.707 "compare": true, 00:26:36.707 "compare_and_write": false, 00:26:36.707 "abort": true, 00:26:36.707 "seek_hole": false, 00:26:36.707 "seek_data": false, 00:26:36.707 "copy": true, 00:26:36.707 "nvme_iov_md": false 00:26:36.707 }, 00:26:36.707 "driver_specific": { 00:26:36.707 
"nvme": [ 00:26:36.707 { 00:26:36.707 "pci_address": "0000:00:11.0", 00:26:36.707 "trid": { 00:26:36.707 "trtype": "PCIe", 00:26:36.707 "traddr": "0000:00:11.0" 00:26:36.707 }, 00:26:36.707 "ctrlr_data": { 00:26:36.707 "cntlid": 0, 00:26:36.707 "vendor_id": "0x1b36", 00:26:36.707 "model_number": "QEMU NVMe Ctrl", 00:26:36.707 "serial_number": "12341", 00:26:36.707 "firmware_revision": "8.0.0", 00:26:36.707 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:36.707 "oacs": { 00:26:36.707 "security": 0, 00:26:36.707 "format": 1, 00:26:36.707 "firmware": 0, 00:26:36.707 "ns_manage": 1 00:26:36.707 }, 00:26:36.707 "multi_ctrlr": false, 00:26:36.707 "ana_reporting": false 00:26:36.707 }, 00:26:36.707 "vs": { 00:26:36.707 "nvme_version": "1.4" 00:26:36.707 }, 00:26:36.707 "ns_data": { 00:26:36.707 "id": 1, 00:26:36.707 "can_share": false 00:26:36.707 } 00:26:36.707 } 00:26:36.707 ], 00:26:36.707 "mp_policy": "active_passive" 00:26:36.707 } 00:26:36.707 } 00:26:36.707 ]' 00:26:36.707 18:30:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:36.707 18:30:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:36.707 18:30:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:36.964 18:30:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:36.964 18:30:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:36.964 18:30:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:26:36.964 18:30:30 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:26:36.964 18:30:30 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:36.964 18:30:30 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:26:36.965 18:30:30 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:36.965 18:30:30 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:37.223 18:30:30 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=fc7baa71-15f9-4cdd-9520-fc5feea02b38 00:26:37.223 18:30:30 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:26:37.223 18:30:30 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fc7baa71-15f9-4cdd-9520-fc5feea02b38 00:26:37.481 18:30:30 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:37.790 18:30:30 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=d79076db-6bc2-4ad5-bd7b-65dc1f2c7ea8 00:26:37.790 18:30:30 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d79076db-6bc2-4ad5-bd7b-65dc1f2c7ea8 00:26:38.063 18:30:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=e443b776-ef79-4442-94b2-aec891a06d66 00:26:38.063 18:30:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e443b776-ef79-4442-94b2-aec891a06d66 00:26:38.063 18:30:31 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:26:38.063 18:30:31 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:38.063 18:30:31 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=e443b776-ef79-4442-94b2-aec891a06d66 00:26:38.063 18:30:31 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:26:38.063 18:30:31 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size e443b776-ef79-4442-94b2-aec891a06d66 00:26:38.063 18:30:31 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=e443b776-ef79-4442-94b2-aec891a06d66 00:26:38.063 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:38.063 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:26:38.063 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:38.063 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e443b776-ef79-4442-94b2-aec891a06d66 00:26:38.063 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:38.063 { 00:26:38.063 "name": "e443b776-ef79-4442-94b2-aec891a06d66", 00:26:38.063 "aliases": [ 00:26:38.063 "lvs/nvme0n1p0" 00:26:38.063 ], 00:26:38.063 "product_name": "Logical Volume", 00:26:38.063 "block_size": 4096, 00:26:38.063 "num_blocks": 26476544, 00:26:38.063 "uuid": "e443b776-ef79-4442-94b2-aec891a06d66", 00:26:38.063 "assigned_rate_limits": { 00:26:38.063 "rw_ios_per_sec": 0, 00:26:38.063 "rw_mbytes_per_sec": 0, 00:26:38.063 "r_mbytes_per_sec": 0, 00:26:38.063 "w_mbytes_per_sec": 0 00:26:38.063 }, 00:26:38.063 "claimed": false, 00:26:38.063 "zoned": false, 00:26:38.063 "supported_io_types": { 00:26:38.063 "read": true, 00:26:38.063 "write": true, 00:26:38.063 "unmap": true, 00:26:38.063 "flush": false, 00:26:38.063 "reset": true, 00:26:38.063 "nvme_admin": false, 00:26:38.063 "nvme_io": false, 00:26:38.063 "nvme_io_md": false, 00:26:38.063 "write_zeroes": true, 00:26:38.063 "zcopy": false, 00:26:38.063 "get_zone_info": false, 00:26:38.063 "zone_management": false, 00:26:38.063 "zone_append": false, 00:26:38.063 "compare": false, 00:26:38.063 "compare_and_write": false, 00:26:38.063 "abort": false, 00:26:38.063 "seek_hole": true, 00:26:38.063 "seek_data": true, 00:26:38.063 "copy": false, 00:26:38.063 "nvme_iov_md": false 00:26:38.063 }, 00:26:38.063 "driver_specific": { 00:26:38.063 "lvol": { 00:26:38.063 "lvol_store_uuid": "d79076db-6bc2-4ad5-bd7b-65dc1f2c7ea8", 00:26:38.063 "base_bdev": "nvme0n1", 00:26:38.063 "thin_provision": true, 00:26:38.063 "num_allocated_clusters": 0, 00:26:38.063 "snapshot": false, 00:26:38.063 "clone": false, 00:26:38.063 "esnap_clone": false 00:26:38.063 } 00:26:38.063 } 00:26:38.063 } 00:26:38.063 ]' 00:26:38.063 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:38.426 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:38.426 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:38.426 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:38.426 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:38.426 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:26:38.426 18:30:31 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:26:38.426 18:30:31 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:26:38.426 18:30:31 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:38.682 18:30:31 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:38.682 18:30:31 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:38.682 18:30:31 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size e443b776-ef79-4442-94b2-aec891a06d66 00:26:38.683 18:30:31 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=e443b776-ef79-4442-94b2-aec891a06d66 00:26:38.683 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:38.683 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:26:38.683 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:38.683 18:30:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e443b776-ef79-4442-94b2-aec891a06d66 00:26:38.941 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:38.941 { 00:26:38.941 "name": "e443b776-ef79-4442-94b2-aec891a06d66", 00:26:38.941 "aliases": [ 00:26:38.941 "lvs/nvme0n1p0" 00:26:38.941 ], 00:26:38.941 "product_name": "Logical Volume", 00:26:38.941 "block_size": 4096, 00:26:38.941 "num_blocks": 26476544, 00:26:38.941 "uuid": "e443b776-ef79-4442-94b2-aec891a06d66", 00:26:38.941 "assigned_rate_limits": { 00:26:38.941 "rw_ios_per_sec": 0, 00:26:38.941 "rw_mbytes_per_sec": 0, 00:26:38.941 "r_mbytes_per_sec": 0, 00:26:38.941 "w_mbytes_per_sec": 0 00:26:38.941 }, 00:26:38.941 "claimed": false, 00:26:38.941 "zoned": false, 00:26:38.941 "supported_io_types": { 00:26:38.941 "read": true, 00:26:38.941 "write": true, 00:26:38.941 "unmap": true, 00:26:38.941 "flush": false, 00:26:38.941 "reset": true, 00:26:38.941 "nvme_admin": false, 00:26:38.941 "nvme_io": false, 00:26:38.941 "nvme_io_md": false, 00:26:38.941 "write_zeroes": true, 00:26:38.941 "zcopy": false, 00:26:38.941 "get_zone_info": false, 00:26:38.941 "zone_management": false, 00:26:38.941 "zone_append": false, 00:26:38.941 "compare": false, 00:26:38.941 "compare_and_write": false, 00:26:38.941 "abort": false, 00:26:38.941 "seek_hole": true, 00:26:38.941 "seek_data": true, 00:26:38.941 "copy": false, 00:26:38.941 "nvme_iov_md": false 00:26:38.941 }, 00:26:38.941 "driver_specific": { 00:26:38.941 "lvol": { 00:26:38.941 "lvol_store_uuid": "d79076db-6bc2-4ad5-bd7b-65dc1f2c7ea8", 00:26:38.941 "base_bdev": "nvme0n1", 00:26:38.941 "thin_provision": true, 00:26:38.941 "num_allocated_clusters": 0, 00:26:38.941 "snapshot": false, 00:26:38.941 "clone": false, 00:26:38.941 "esnap_clone": false 00:26:38.941 } 00:26:38.941 } 00:26:38.941 } 00:26:38.941 ]' 00:26:38.941 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:38.941 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:38.941 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:38.941 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:38.941 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:38.941 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:26:38.941 18:30:32 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:26:38.941 18:30:32 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:39.198 18:30:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:26:39.198 18:30:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size e443b776-ef79-4442-94b2-aec891a06d66 00:26:39.198 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=e443b776-ef79-4442-94b2-aec891a06d66 00:26:39.198 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:39.198 18:30:32 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:26:39.198 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:26:39.198 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e443b776-ef79-4442-94b2-aec891a06d66 00:26:39.457 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:39.457 { 00:26:39.457 "name": "e443b776-ef79-4442-94b2-aec891a06d66", 00:26:39.457 "aliases": [ 00:26:39.457 "lvs/nvme0n1p0" 00:26:39.457 ], 00:26:39.457 "product_name": "Logical Volume", 00:26:39.457 "block_size": 4096, 00:26:39.457 "num_blocks": 26476544, 00:26:39.457 "uuid": "e443b776-ef79-4442-94b2-aec891a06d66", 00:26:39.457 "assigned_rate_limits": { 00:26:39.457 "rw_ios_per_sec": 0, 00:26:39.457 "rw_mbytes_per_sec": 0, 00:26:39.457 "r_mbytes_per_sec": 0, 00:26:39.457 "w_mbytes_per_sec": 0 00:26:39.457 }, 00:26:39.457 "claimed": false, 00:26:39.457 "zoned": false, 00:26:39.457 "supported_io_types": { 00:26:39.457 "read": true, 00:26:39.457 "write": true, 00:26:39.457 "unmap": true, 00:26:39.457 "flush": false, 00:26:39.457 "reset": true, 00:26:39.457 "nvme_admin": false, 00:26:39.457 "nvme_io": false, 00:26:39.457 "nvme_io_md": false, 00:26:39.457 "write_zeroes": true, 00:26:39.457 "zcopy": false, 00:26:39.457 "get_zone_info": false, 00:26:39.457 "zone_management": false, 00:26:39.457 "zone_append": false, 00:26:39.457 "compare": false, 00:26:39.457 "compare_and_write": false, 00:26:39.457 "abort": false, 00:26:39.457 "seek_hole": true, 00:26:39.457 "seek_data": true, 00:26:39.457 "copy": false, 00:26:39.457 "nvme_iov_md": false 00:26:39.457 }, 00:26:39.457 "driver_specific": { 00:26:39.457 "lvol": { 00:26:39.457 "lvol_store_uuid": "d79076db-6bc2-4ad5-bd7b-65dc1f2c7ea8", 00:26:39.457 "base_bdev": "nvme0n1", 00:26:39.457 "thin_provision": true, 00:26:39.457 "num_allocated_clusters": 0, 00:26:39.457 "snapshot": false, 00:26:39.457 "clone": false, 00:26:39.457 "esnap_clone": false 00:26:39.457 } 00:26:39.457 } 00:26:39.457 } 00:26:39.457 ]' 00:26:39.457 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:39.457 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:26:39.457 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:39.457 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:39.457 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:39.457 18:30:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:26:39.457 18:30:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:26:39.457 18:30:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e443b776-ef79-4442-94b2-aec891a06d66 -c nvc0n1p0 --l2p_dram_limit 20 00:26:39.716 [2024-11-26 18:30:32.941076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.716 [2024-11-26 18:30:32.941157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:39.716 [2024-11-26 18:30:32.941177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:39.716 [2024-11-26 18:30:32.941189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.716 [2024-11-26 18:30:32.941289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.716 [2024-11-26 18:30:32.941305] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:39.716 [2024-11-26 18:30:32.941316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:26:39.716 [2024-11-26 18:30:32.941328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.716 [2024-11-26 18:30:32.941360] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:39.716 [2024-11-26 18:30:32.942659] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:39.716 [2024-11-26 18:30:32.942692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.716 [2024-11-26 18:30:32.942704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:39.716 [2024-11-26 18:30:32.942721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:26:39.716 [2024-11-26 18:30:32.942732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.716 [2024-11-26 18:30:32.942820] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b1e00597-817c-4a37-b5b0-ed08a56b036d 00:26:39.716 [2024-11-26 18:30:32.944416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.716 [2024-11-26 18:30:32.944450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:39.716 [2024-11-26 18:30:32.944471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:39.716 [2024-11-26 18:30:32.944480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.716 [2024-11-26 18:30:32.952252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.716 [2024-11-26 18:30:32.952298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:39.716 [2024-11-26 18:30:32.952314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.733 ms 00:26:39.716 [2024-11-26 18:30:32.952328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.716 [2024-11-26 18:30:32.952447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.716 [2024-11-26 18:30:32.952468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:39.716 [2024-11-26 18:30:32.952486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:26:39.716 [2024-11-26 18:30:32.952496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.716 [2024-11-26 18:30:32.952578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.716 [2024-11-26 18:30:32.952589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:39.716 [2024-11-26 18:30:32.952601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:39.716 [2024-11-26 18:30:32.952610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.716 [2024-11-26 18:30:32.952683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:39.716 [2024-11-26 18:30:32.958754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.716 [2024-11-26 18:30:32.958904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:39.716 [2024-11-26 18:30:32.958924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.104 ms 00:26:39.716 [2024-11-26 18:30:32.958941] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.716 [2024-11-26 18:30:32.958987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.716 [2024-11-26 18:30:32.959000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:39.716 [2024-11-26 18:30:32.959011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:39.716 [2024-11-26 18:30:32.959022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.716 [2024-11-26 18:30:32.959084] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:39.716 [2024-11-26 18:30:32.959260] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:39.716 [2024-11-26 18:30:32.959280] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:39.716 [2024-11-26 18:30:32.959295] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:39.716 [2024-11-26 18:30:32.959307] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:39.716 [2024-11-26 18:30:32.959320] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:39.716 [2024-11-26 18:30:32.959330] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:39.716 [2024-11-26 18:30:32.959341] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:39.716 [2024-11-26 18:30:32.959350] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:39.716 [2024-11-26 18:30:32.959361] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:39.716 [2024-11-26 18:30:32.959374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.716 [2024-11-26 18:30:32.959385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:39.716 [2024-11-26 18:30:32.959395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:26:39.716 [2024-11-26 18:30:32.959407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.716 [2024-11-26 18:30:32.959493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.716 [2024-11-26 18:30:32.959508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:39.717 [2024-11-26 18:30:32.959517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:39.717 [2024-11-26 18:30:32.959530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.717 [2024-11-26 18:30:32.959644] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:39.717 [2024-11-26 18:30:32.959664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:39.717 [2024-11-26 18:30:32.959674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:39.717 [2024-11-26 18:30:32.959685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.717 [2024-11-26 18:30:32.959695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:39.717 [2024-11-26 18:30:32.959705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:39.717 [2024-11-26 18:30:32.959715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:39.717 
[2024-11-26 18:30:32.959725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:39.717 [2024-11-26 18:30:32.959733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:39.717 [2024-11-26 18:30:32.959744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:39.717 [2024-11-26 18:30:32.959752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:39.717 [2024-11-26 18:30:32.959782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:39.717 [2024-11-26 18:30:32.959792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:39.717 [2024-11-26 18:30:32.959802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:39.717 [2024-11-26 18:30:32.959811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:39.717 [2024-11-26 18:30:32.959825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.717 [2024-11-26 18:30:32.959833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:39.717 [2024-11-26 18:30:32.959843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:39.717 [2024-11-26 18:30:32.959852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.717 [2024-11-26 18:30:32.959864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:39.717 [2024-11-26 18:30:32.959872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:39.717 [2024-11-26 18:30:32.959882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.717 [2024-11-26 18:30:32.959890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:39.717 [2024-11-26 18:30:32.959900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:39.717 [2024-11-26 18:30:32.959908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.717 [2024-11-26 18:30:32.959918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:39.717 [2024-11-26 18:30:32.959927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:39.717 [2024-11-26 18:30:32.959936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.717 [2024-11-26 18:30:32.959945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:39.717 [2024-11-26 18:30:32.959955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:39.717 [2024-11-26 18:30:32.959963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:39.717 [2024-11-26 18:30:32.959975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:39.717 [2024-11-26 18:30:32.959983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:39.717 [2024-11-26 18:30:32.959993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:39.717 [2024-11-26 18:30:32.960001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:39.717 [2024-11-26 18:30:32.960012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:39.717 [2024-11-26 18:30:32.960019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:39.717 [2024-11-26 18:30:32.960029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:39.717 [2024-11-26 18:30:32.960041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:26:39.717 [2024-11-26 18:30:32.960051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.717 [2024-11-26 18:30:32.960059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:39.717 [2024-11-26 18:30:32.960070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:39.717 [2024-11-26 18:30:32.960078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.717 [2024-11-26 18:30:32.960087] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:39.717 [2024-11-26 18:30:32.960098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:39.717 [2024-11-26 18:30:32.960110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:39.717 [2024-11-26 18:30:32.960119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:39.717 [2024-11-26 18:30:32.960134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:39.717 [2024-11-26 18:30:32.960143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:39.717 [2024-11-26 18:30:32.960154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:39.717 [2024-11-26 18:30:32.960162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:39.717 [2024-11-26 18:30:32.960172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:39.717 [2024-11-26 18:30:32.960181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:39.717 [2024-11-26 18:30:32.960197] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:39.717 [2024-11-26 18:30:32.960208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:39.717 [2024-11-26 18:30:32.960220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:39.717 [2024-11-26 18:30:32.960229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:39.717 [2024-11-26 18:30:32.960239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:39.717 [2024-11-26 18:30:32.960248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:39.717 [2024-11-26 18:30:32.960260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:39.717 [2024-11-26 18:30:32.960269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:39.717 [2024-11-26 18:30:32.960280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:39.717 [2024-11-26 18:30:32.960288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:39.717 [2024-11-26 18:30:32.960302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:39.717 [2024-11-26 18:30:32.960310] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:39.717 [2024-11-26 18:30:32.960321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:39.717 [2024-11-26 18:30:32.960330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:39.717 [2024-11-26 18:30:32.960340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:39.717 [2024-11-26 18:30:32.960350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:39.717 [2024-11-26 18:30:32.960360] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:39.717 [2024-11-26 18:30:32.960372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:39.717 [2024-11-26 18:30:32.960387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:39.717 [2024-11-26 18:30:32.960396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:39.717 [2024-11-26 18:30:32.960408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:39.717 [2024-11-26 18:30:32.960416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:39.717 [2024-11-26 18:30:32.960429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:39.717 [2024-11-26 18:30:32.960439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:39.717 [2024-11-26 18:30:32.960451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:26:39.717 [2024-11-26 18:30:32.960460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:39.717 [2024-11-26 18:30:32.960507] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
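The capacity figures in the startup dump above follow directly from values logged earlier in this run. A minimal shell cross-check, with every input copied from this log (illustrative only, not part of the test scripts):

  # base device: num_blocks * block_size from the bdev_get_bdevs output above
  echo $(( 26476544 * 4096 / 1024 / 1024 ))   # -> 103424 (MiB), the logged "Base device capacity"
  # l2p region: "L2P entries" * "L2P address size" from the layout dump above
  echo $(( 20971520 * 4 / 1024 / 1024 ))      # -> 80 (MiB), the logged "Region l2p" size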
00:26:39.717 [2024-11-26 18:30:32.960518] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:43.003 [2024-11-26 18:30:35.805887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.003 [2024-11-26 18:30:35.805980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:43.003 [2024-11-26 18:30:35.806001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2850.854 ms 00:26:43.003 [2024-11-26 18:30:35.806011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.003 [2024-11-26 18:30:35.852716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.003 [2024-11-26 18:30:35.852886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:43.003 [2024-11-26 18:30:35.852911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.427 ms 00:26:43.003 [2024-11-26 18:30:35.852922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.003 [2024-11-26 18:30:35.853117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.003 [2024-11-26 18:30:35.853130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:43.003 [2024-11-26 18:30:35.853146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:26:43.003 [2024-11-26 18:30:35.853156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.003 [2024-11-26 18:30:35.912118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.003 [2024-11-26 18:30:35.912281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:43.003 [2024-11-26 18:30:35.912306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.032 ms 00:26:43.003 [2024-11-26 18:30:35.912315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.003 [2024-11-26 18:30:35.912378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.003 [2024-11-26 18:30:35.912388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:43.003 [2024-11-26 18:30:35.912401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:43.003 [2024-11-26 18:30:35.912427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.003 [2024-11-26 18:30:35.913013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.003 [2024-11-26 18:30:35.913031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:43.003 [2024-11-26 18:30:35.913045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:26:43.003 [2024-11-26 18:30:35.913054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.003 [2024-11-26 18:30:35.913185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.003 [2024-11-26 18:30:35.913207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:43.003 [2024-11-26 18:30:35.913222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:26:43.003 [2024-11-26 18:30:35.913232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.003 [2024-11-26 18:30:35.933235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.003 [2024-11-26 18:30:35.933298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:43.003 [2024-11-26 
18:30:35.933316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.012 ms 00:26:43.003 [2024-11-26 18:30:35.933341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.003 [2024-11-26 18:30:35.949443] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:26:43.003 [2024-11-26 18:30:35.956076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.003 [2024-11-26 18:30:35.956151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:43.003 [2024-11-26 18:30:35.956167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.654 ms 00:26:43.003 [2024-11-26 18:30:35.956178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.003 [2024-11-26 18:30:36.038442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.003 [2024-11-26 18:30:36.038639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:43.003 [2024-11-26 18:30:36.038662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.364 ms 00:26:43.003 [2024-11-26 18:30:36.038674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.003 [2024-11-26 18:30:36.038912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.003 [2024-11-26 18:30:36.038933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:43.003 [2024-11-26 18:30:36.038944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:26:43.003 [2024-11-26 18:30:36.038966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.003 [2024-11-26 18:30:36.083863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.003 [2024-11-26 18:30:36.084049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:43.003 [2024-11-26 18:30:36.084070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.895 ms 00:26:43.003 [2024-11-26 18:30:36.084081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.003 [2024-11-26 18:30:36.128139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.003 [2024-11-26 18:30:36.128233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:43.003 [2024-11-26 18:30:36.128251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.071 ms 00:26:43.003 [2024-11-26 18:30:36.128261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.003 [2024-11-26 18:30:36.129101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.003 [2024-11-26 18:30:36.129139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:43.003 [2024-11-26 18:30:36.129152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:26:43.003 [2024-11-26 18:30:36.129164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.004 [2024-11-26 18:30:36.247631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.004 [2024-11-26 18:30:36.247743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:43.004 [2024-11-26 18:30:36.247762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.580 ms 00:26:43.004 [2024-11-26 18:30:36.247775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.004 [2024-11-26 
18:30:36.294055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.004 [2024-11-26 18:30:36.294255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:43.004 [2024-11-26 18:30:36.294281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.195 ms 00:26:43.004 [2024-11-26 18:30:36.294294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.262 [2024-11-26 18:30:36.340279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.262 [2024-11-26 18:30:36.340369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:43.262 [2024-11-26 18:30:36.340385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.974 ms 00:26:43.262 [2024-11-26 18:30:36.340396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.262 [2024-11-26 18:30:36.385693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.262 [2024-11-26 18:30:36.385782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:43.262 [2024-11-26 18:30:36.385798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.293 ms 00:26:43.262 [2024-11-26 18:30:36.385809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.262 [2024-11-26 18:30:36.385907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.262 [2024-11-26 18:30:36.385922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:43.262 [2024-11-26 18:30:36.385933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:43.262 [2024-11-26 18:30:36.385943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.263 [2024-11-26 18:30:36.386105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:43.263 [2024-11-26 18:30:36.386124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:43.263 [2024-11-26 18:30:36.386134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:43.263 [2024-11-26 18:30:36.386144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:43.263 [2024-11-26 18:30:36.387392] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3452.447 ms, result 0 00:26:43.263 { 00:26:43.263 "name": "ftl0", 00:26:43.263 "uuid": "b1e00597-817c-4a37-b5b0-ed08a56b036d" 00:26:43.263 } 00:26:43.263 18:30:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:26:43.263 18:30:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:26:43.263 18:30:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:26:43.520 18:30:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:26:43.520 [2024-11-26 18:30:36.783478] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:26:43.520 I/O size of 69632 is greater than zero copy threshold (65536). 00:26:43.520 Zero copy mechanism will not be used. 00:26:43.520 Running I/O for 4 seconds... 
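The zero-copy notice just above is a plain threshold comparison; a one-line sketch with both numbers taken verbatim from the logged message (the 65536-byte threshold is assumed only from that message, not looked up in SPDK source):

  io_size=69632   # from 'perform_tests ... -o 69632', i.e. 68 KiB per I/O
  (( io_size > 65536 )) && echo 'Zero copy mechanism will not be used.'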
00:26:45.463 2449.00 IOPS, 162.63 MiB/s [2024-11-26T18:30:40.173Z] 2406.50 IOPS, 159.81 MiB/s [2024-11-26T18:30:41.155Z] 2326.33 IOPS, 154.48 MiB/s [2024-11-26T18:30:41.155Z] 2245.25 IOPS, 149.10 MiB/s 00:26:47.820 Latency(us) 00:26:47.820 [2024-11-26T18:30:41.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:47.820 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:26:47.820 ftl0 : 4.00 2244.45 149.05 0.00 0.00 466.13 208.38 2174.99 00:26:47.820 [2024-11-26T18:30:41.155Z] =================================================================================================================== 00:26:47.820 [2024-11-26T18:30:41.155Z] Total : 2244.45 149.05 0.00 0.00 466.13 208.38 2174.99 00:26:47.820 [2024-11-26 18:30:40.788284] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:26:47.820 { 00:26:47.820 "results": [ 00:26:47.820 { 00:26:47.820 "job": "ftl0", 00:26:47.820 "core_mask": "0x1", 00:26:47.820 "workload": "randwrite", 00:26:47.820 "status": "finished", 00:26:47.820 "queue_depth": 1, 00:26:47.820 "io_size": 69632, 00:26:47.820 "runtime": 4.001872, 00:26:47.820 "iops": 2244.4495975883287, 00:26:47.820 "mibps": 149.04548108984994, 00:26:47.820 "io_failed": 0, 00:26:47.820 "io_timeout": 0, 00:26:47.820 "avg_latency_us": 466.12616129882275, 00:26:47.820 "min_latency_us": 208.37729257641922, 00:26:47.820 "max_latency_us": 2174.993886462882 00:26:47.820 } 00:26:47.820 ], 00:26:47.820 "core_count": 1 00:26:47.820 } 00:26:47.820 18:30:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:26:47.820 [2024-11-26 18:30:40.919900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:26:47.820 Running I/O for 4 seconds... 
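A quick consistency check on the 69632-byte randwrite results above: the MiB/s column is IOPS times I/O size divided by 2^20, and the logged "iops" and "mibps" values agree (illustrative arithmetic, not test output):

  awk 'BEGIN { printf "%.8f\n", 2244.4495975883287 * 69632 / 1048576 }'   # -> 149.04548109, the logged "mibps"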
00:26:49.690 9854.00 IOPS, 38.49 MiB/s [2024-11-26T18:30:43.959Z] 9547.00 IOPS, 37.29 MiB/s [2024-11-26T18:30:45.335Z] 9616.33 IOPS, 37.56 MiB/s [2024-11-26T18:30:45.335Z] 9645.75 IOPS, 37.68 MiB/s 00:26:52.000 Latency(us) 00:26:52.000 [2024-11-26T18:30:45.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.000 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:26:52.000 ftl0 : 4.02 9636.69 37.64 0.00 0.00 13253.25 293.34 35944.64 00:26:52.000 [2024-11-26T18:30:45.335Z] =================================================================================================================== 00:26:52.000 [2024-11-26T18:30:45.335Z] Total : 9636.69 37.64 0.00 0.00 13253.25 0.00 35944.64 00:26:52.000 [2024-11-26 18:30:44.939868] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:26:52.000 { 00:26:52.000 "results": [ 00:26:52.000 { 00:26:52.000 "job": "ftl0", 00:26:52.000 "core_mask": "0x1", 00:26:52.000 "workload": "randwrite", 00:26:52.000 "status": "finished", 00:26:52.000 "queue_depth": 128, 00:26:52.000 "io_size": 4096, 00:26:52.000 "runtime": 4.01663, 00:26:52.000 "iops": 9636.685480116415, 00:26:52.000 "mibps": 37.64330265670475, 00:26:52.000 "io_failed": 0, 00:26:52.000 "io_timeout": 0, 00:26:52.000 "avg_latency_us": 13253.254103705782, 00:26:52.000 "min_latency_us": 293.3379912663755, 00:26:52.000 "max_latency_us": 35944.635807860264 00:26:52.000 } 00:26:52.000 ], 00:26:52.000 "core_count": 1 00:26:52.000 } 00:26:52.000 18:30:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:26:52.000 [2024-11-26 18:30:45.074358] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:26:52.000 Running I/O for 4 seconds... 
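A similar back-of-the-envelope check on the queue-depth-128 randwrite run above, using Little's law (mean outstanding I/Os = IOPS * mean latency), which should land near the configured depth (illustrative, not test output):

  awk 'BEGIN { printf "%.1f\n", 9636.69 * 13253.25 / 1e6 }'   # -> ~127.7 outstanding I/Os, vs. -q 128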
00:26:53.869 7740.00 IOPS, 30.23 MiB/s [2024-11-26T18:30:48.141Z] 7799.50 IOPS, 30.47 MiB/s [2024-11-26T18:30:49.127Z] 7822.33 IOPS, 30.56 MiB/s [2024-11-26T18:30:49.127Z] 7842.25 IOPS, 30.63 MiB/s 00:26:55.792 Latency(us) 00:26:55.792 [2024-11-26T18:30:49.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.792 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:55.792 Verification LBA range: start 0x0 length 0x1400000 00:26:55.792 ftl0 : 4.01 7854.76 30.68 0.00 0.00 16243.59 282.61 18086.79 00:26:55.792 [2024-11-26T18:30:49.127Z] =================================================================================================================== 00:26:55.792 [2024-11-26T18:30:49.127Z] Total : 7854.76 30.68 0.00 0.00 16243.59 0.00 18086.79 00:26:55.792 [2024-11-26 18:30:49.096554] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:26:55.792 { 00:26:55.792 "results": [ 00:26:55.792 { 00:26:55.792 "job": "ftl0", 00:26:55.792 "core_mask": "0x1", 00:26:55.792 "workload": "verify", 00:26:55.792 "status": "finished", 00:26:55.792 "verify_range": { 00:26:55.792 "start": 0, 00:26:55.792 "length": 20971520 00:26:55.792 }, 00:26:55.792 "queue_depth": 128, 00:26:55.792 "io_size": 4096, 00:26:55.792 "runtime": 4.009923, 00:26:55.792 "iops": 7854.764293478952, 00:26:55.792 "mibps": 30.682673021402156, 00:26:55.792 "io_failed": 0, 00:26:55.792 "io_timeout": 0, 00:26:55.792 "avg_latency_us": 16243.58695050045, 00:26:55.792 "min_latency_us": 282.6061135371179, 00:26:55.792 "max_latency_us": 18086.791266375545 00:26:55.792 } 00:26:55.792 ], 00:26:55.792 "core_count": 1 00:26:55.792 } 00:26:55.792 18:30:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:26:56.049 [2024-11-26 18:30:49.311963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.049 [2024-11-26 18:30:49.312106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:56.049 [2024-11-26 18:30:49.312141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:56.049 [2024-11-26 18:30:49.312164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.049 [2024-11-26 18:30:49.312200] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:56.049 [2024-11-26 18:30:49.316391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.049 [2024-11-26 18:30:49.316468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:56.049 [2024-11-26 18:30:49.316499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.156 ms 00:26:56.049 [2024-11-26 18:30:49.316519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.049 [2024-11-26 18:30:49.318538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.049 [2024-11-26 18:30:49.318640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:56.049 [2024-11-26 18:30:49.318683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.981 ms 00:26:56.049 [2024-11-26 18:30:49.318708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.307 [2024-11-26 18:30:49.532523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.307 [2024-11-26 18:30:49.532703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:26:56.307 [2024-11-26 18:30:49.532736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 214.175 ms 00:26:56.307 [2024-11-26 18:30:49.532747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.307 [2024-11-26 18:30:49.538047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.307 [2024-11-26 18:30:49.538127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:56.307 [2024-11-26 18:30:49.538144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.260 ms 00:26:56.307 [2024-11-26 18:30:49.538155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.307 [2024-11-26 18:30:49.577774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.307 [2024-11-26 18:30:49.577838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:56.307 [2024-11-26 18:30:49.577855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.633 ms 00:26:56.307 [2024-11-26 18:30:49.577879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.307 [2024-11-26 18:30:49.600109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.307 [2024-11-26 18:30:49.600173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:56.307 [2024-11-26 18:30:49.600190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.187 ms 00:26:56.307 [2024-11-26 18:30:49.600199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.307 [2024-11-26 18:30:49.600384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.307 [2024-11-26 18:30:49.600400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:56.307 [2024-11-26 18:30:49.600413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:26:56.307 [2024-11-26 18:30:49.600421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.307 [2024-11-26 18:30:49.637107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.307 [2024-11-26 18:30:49.637168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:56.307 [2024-11-26 18:30:49.637185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.736 ms 00:26:56.307 [2024-11-26 18:30:49.637193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.567 [2024-11-26 18:30:49.677232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.567 [2024-11-26 18:30:49.677295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:56.567 [2024-11-26 18:30:49.677311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.052 ms 00:26:56.567 [2024-11-26 18:30:49.677319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.567 [2024-11-26 18:30:49.714863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.567 [2024-11-26 18:30:49.714927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:56.567 [2024-11-26 18:30:49.714943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.541 ms 00:26:56.567 [2024-11-26 18:30:49.714951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.567 [2024-11-26 18:30:49.751453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.567 [2024-11-26 18:30:49.751513] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:56.567 [2024-11-26 18:30:49.751532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.432 ms 00:26:56.567 [2024-11-26 18:30:49.751540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.567 [2024-11-26 18:30:49.751597] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:56.567 [2024-11-26 18:30:49.751612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:26:56.567 [2024-11-26 18:30:49.751821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:56.567 [2024-11-26 18:30:49.751956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.751976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.751986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.751994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752509] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:56.568 [2024-11-26 18:30:49.752553] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:56.568 [2024-11-26 18:30:49.752563] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b1e00597-817c-4a37-b5b0-ed08a56b036d 00:26:56.568 [2024-11-26 18:30:49.752574] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:56.568 [2024-11-26 18:30:49.752583] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:56.568 [2024-11-26 18:30:49.752591] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:56.568 [2024-11-26 18:30:49.752600] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:56.568 [2024-11-26 18:30:49.752608] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:56.568 [2024-11-26 18:30:49.752617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:56.568 [2024-11-26 18:30:49.752625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:56.568 [2024-11-26 18:30:49.752643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:56.568 [2024-11-26 18:30:49.752651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:56.568 [2024-11-26 18:30:49.752661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.568 [2024-11-26 18:30:49.752670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:56.568 [2024-11-26 18:30:49.752680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:26:56.568 [2024-11-26 18:30:49.752688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.568 [2024-11-26 18:30:49.772350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.568 [2024-11-26 18:30:49.772405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:56.568 [2024-11-26 18:30:49.772420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.624 ms 00:26:56.568 [2024-11-26 18:30:49.772428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.568 [2024-11-26 18:30:49.773054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.568 [2024-11-26 18:30:49.773069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:56.568 [2024-11-26 18:30:49.773081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:26:56.568 [2024-11-26 18:30:49.773089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.568 [2024-11-26 18:30:49.826187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.568 [2024-11-26 18:30:49.826249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:56.568 [2024-11-26 18:30:49.826267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.568 [2024-11-26 18:30:49.826274] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:56.568 [2024-11-26 18:30:49.826349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.568 [2024-11-26 18:30:49.826357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:56.568 [2024-11-26 18:30:49.826367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.568 [2024-11-26 18:30:49.826375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.568 [2024-11-26 18:30:49.826471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.568 [2024-11-26 18:30:49.826484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:56.568 [2024-11-26 18:30:49.826494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.568 [2024-11-26 18:30:49.826501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.569 [2024-11-26 18:30:49.826519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.569 [2024-11-26 18:30:49.826527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:56.569 [2024-11-26 18:30:49.826536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.569 [2024-11-26 18:30:49.826543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.827 [2024-11-26 18:30:49.953575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.827 [2024-11-26 18:30:49.953654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:56.827 [2024-11-26 18:30:49.953674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.827 [2024-11-26 18:30:49.953682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.827 [2024-11-26 18:30:50.057289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.827 [2024-11-26 18:30:50.057348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:56.827 [2024-11-26 18:30:50.057362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.827 [2024-11-26 18:30:50.057371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.827 [2024-11-26 18:30:50.057484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.827 [2024-11-26 18:30:50.057494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:56.827 [2024-11-26 18:30:50.057504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.827 [2024-11-26 18:30:50.057512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.827 [2024-11-26 18:30:50.057556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.827 [2024-11-26 18:30:50.057565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:56.827 [2024-11-26 18:30:50.057574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.827 [2024-11-26 18:30:50.057581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.827 [2024-11-26 18:30:50.057729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.827 [2024-11-26 18:30:50.057745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:56.827 [2024-11-26 18:30:50.057759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:26:56.827 [2024-11-26 18:30:50.057767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.827 [2024-11-26 18:30:50.057809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.827 [2024-11-26 18:30:50.057820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:56.827 [2024-11-26 18:30:50.057829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.827 [2024-11-26 18:30:50.057837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.827 [2024-11-26 18:30:50.057876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.827 [2024-11-26 18:30:50.057887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:56.827 [2024-11-26 18:30:50.057897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.827 [2024-11-26 18:30:50.057914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.827 [2024-11-26 18:30:50.057960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:56.827 [2024-11-26 18:30:50.057969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:56.827 [2024-11-26 18:30:50.057980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:56.827 [2024-11-26 18:30:50.057992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.827 [2024-11-26 18:30:50.058134] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 747.559 ms, result 0 00:26:56.827 true 00:26:56.827 18:30:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78598 00:26:56.827 18:30:50 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78598 ']' 00:26:56.827 18:30:50 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78598 00:26:56.827 18:30:50 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:26:56.827 18:30:50 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:56.827 18:30:50 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78598 00:26:56.827 killing process with pid 78598 00:26:56.827 Received shutdown signal, test time was about 4.000000 seconds 00:26:56.827 00:26:56.827 Latency(us) 00:26:56.827 [2024-11-26T18:30:50.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.827 [2024-11-26T18:30:50.162Z] =================================================================================================================== 00:26:56.827 [2024-11-26T18:30:50.162Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:56.827 18:30:50 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:56.828 18:30:50 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:56.828 18:30:50 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78598' 00:26:56.828 18:30:50 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78598 00:26:56.828 18:30:50 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78598 00:27:02.090 18:30:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:02.090 Remove shared memory files 00:27:02.090 18:30:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:27:02.090 18:30:55 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:02.090 18:30:55 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:27:02.090 18:30:55 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:27:02.090 18:30:55 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:27:02.090 18:30:55 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:02.090 18:30:55 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:27:02.090 ************************************ 00:27:02.090 END TEST ftl_bdevperf 00:27:02.090 ************************************ 00:27:02.090 00:27:02.090 real 0m26.952s 00:27:02.090 user 0m30.165s 00:27:02.090 sys 0m1.327s 00:27:02.090 18:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:02.090 18:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:02.090 18:30:55 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:27:02.090 18:30:55 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:02.090 18:30:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.090 18:30:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:02.091 ************************************ 00:27:02.091 START TEST ftl_trim 00:27:02.091 ************************************ 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:27:02.091 * Looking for test storage... 00:27:02.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:02.091 18:30:55 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:02.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.091 --rc genhtml_branch_coverage=1 00:27:02.091 --rc genhtml_function_coverage=1 00:27:02.091 --rc genhtml_legend=1 00:27:02.091 --rc geninfo_all_blocks=1 00:27:02.091 --rc geninfo_unexecuted_blocks=1 00:27:02.091 00:27:02.091 ' 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:02.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.091 --rc genhtml_branch_coverage=1 00:27:02.091 --rc genhtml_function_coverage=1 00:27:02.091 --rc genhtml_legend=1 00:27:02.091 --rc geninfo_all_blocks=1 00:27:02.091 --rc geninfo_unexecuted_blocks=1 00:27:02.091 00:27:02.091 ' 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:02.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.091 --rc genhtml_branch_coverage=1 00:27:02.091 --rc genhtml_function_coverage=1 00:27:02.091 --rc genhtml_legend=1 00:27:02.091 --rc geninfo_all_blocks=1 00:27:02.091 --rc geninfo_unexecuted_blocks=1 00:27:02.091 00:27:02.091 ' 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:02.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.091 --rc genhtml_branch_coverage=1 00:27:02.091 --rc genhtml_function_coverage=1 00:27:02.091 --rc genhtml_legend=1 00:27:02.091 --rc geninfo_all_blocks=1 00:27:02.091 --rc geninfo_unexecuted_blocks=1 00:27:02.091 00:27:02.091 ' 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
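The xtrace above shows scripts/common.sh gating the lcov coverage options on 'lt 1.15 2', i.e. comparing the installed lcov version against 2 component by component. A minimal reconstruction of that comparison helper, inferred from the trace alone (the shipped cmp_versions/decimal functions may differ in detail):

#!/usr/bin/env bash
# Sketch of the version comparison traced above; reconstructed from the
# xtrace, not copied from scripts/common.sh, so details may differ.
decimal() {                      # normalize one version component to a number
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}
cmp_versions() {                 # usage: cmp_versions 1.15 '<' 2
    local ver1 ver1_l ver2 ver2_l v op=$2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
        ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '<=' || $op == '>=' || $op == '==' ]]  # all components equal
}
lt() { cmp_versions "$1" '<' "$2"; }   # 'lt 1.15 2' exits 0, as logged above

Since 1 < 2 in the first component, the helper returns 0 and the LCOV_OPTS/LCOV exports seen in the trace are taken.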
00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:02.091 18:30:55 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78990 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:27:02.091 18:30:55 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78990 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78990 ']' 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.091 18:30:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:02.350 [2024-11-26 18:30:55.486430] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:27:02.350 [2024-11-26 18:30:55.486638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78990 ] 00:27:02.350 [2024-11-26 18:30:55.662414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:02.609 [2024-11-26 18:30:55.784101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.609 [2024-11-26 18:30:55.784242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.609 [2024-11-26 18:30:55.784279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:03.547 18:30:56 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:03.547 18:30:56 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:03.548 18:30:56 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:03.548 18:30:56 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:27:03.548 18:30:56 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:03.548 18:30:56 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:27:03.548 18:30:56 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:27:03.548 18:30:56 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:03.806 18:30:56 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:03.806 18:30:56 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:27:03.806 18:30:56 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:03.806 18:30:56 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:03.806 18:30:56 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:03.806 18:30:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:03.806 18:30:56 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:03.806 18:30:56 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:04.064 18:30:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:04.064 { 00:27:04.064 "name": "nvme0n1", 00:27:04.064 "aliases": [ 
00:27:04.064 "a798e882-c15a-4418-b2a8-1ec578296841" 00:27:04.064 ], 00:27:04.064 "product_name": "NVMe disk", 00:27:04.064 "block_size": 4096, 00:27:04.064 "num_blocks": 1310720, 00:27:04.064 "uuid": "a798e882-c15a-4418-b2a8-1ec578296841", 00:27:04.064 "numa_id": -1, 00:27:04.064 "assigned_rate_limits": { 00:27:04.064 "rw_ios_per_sec": 0, 00:27:04.064 "rw_mbytes_per_sec": 0, 00:27:04.064 "r_mbytes_per_sec": 0, 00:27:04.064 "w_mbytes_per_sec": 0 00:27:04.064 }, 00:27:04.064 "claimed": true, 00:27:04.064 "claim_type": "read_many_write_one", 00:27:04.064 "zoned": false, 00:27:04.064 "supported_io_types": { 00:27:04.064 "read": true, 00:27:04.064 "write": true, 00:27:04.064 "unmap": true, 00:27:04.064 "flush": true, 00:27:04.064 "reset": true, 00:27:04.064 "nvme_admin": true, 00:27:04.064 "nvme_io": true, 00:27:04.064 "nvme_io_md": false, 00:27:04.064 "write_zeroes": true, 00:27:04.064 "zcopy": false, 00:27:04.064 "get_zone_info": false, 00:27:04.064 "zone_management": false, 00:27:04.064 "zone_append": false, 00:27:04.064 "compare": true, 00:27:04.064 "compare_and_write": false, 00:27:04.064 "abort": true, 00:27:04.064 "seek_hole": false, 00:27:04.064 "seek_data": false, 00:27:04.064 "copy": true, 00:27:04.064 "nvme_iov_md": false 00:27:04.064 }, 00:27:04.064 "driver_specific": { 00:27:04.064 "nvme": [ 00:27:04.064 { 00:27:04.064 "pci_address": "0000:00:11.0", 00:27:04.064 "trid": { 00:27:04.064 "trtype": "PCIe", 00:27:04.064 "traddr": "0000:00:11.0" 00:27:04.064 }, 00:27:04.064 "ctrlr_data": { 00:27:04.064 "cntlid": 0, 00:27:04.064 "vendor_id": "0x1b36", 00:27:04.064 "model_number": "QEMU NVMe Ctrl", 00:27:04.064 "serial_number": "12341", 00:27:04.064 "firmware_revision": "8.0.0", 00:27:04.064 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:04.064 "oacs": { 00:27:04.064 "security": 0, 00:27:04.064 "format": 1, 00:27:04.064 "firmware": 0, 00:27:04.064 "ns_manage": 1 00:27:04.064 }, 00:27:04.064 "multi_ctrlr": false, 00:27:04.064 "ana_reporting": false 00:27:04.064 }, 00:27:04.064 "vs": { 00:27:04.064 "nvme_version": "1.4" 00:27:04.064 }, 00:27:04.064 "ns_data": { 00:27:04.064 "id": 1, 00:27:04.064 "can_share": false 00:27:04.064 } 00:27:04.064 } 00:27:04.064 ], 00:27:04.064 "mp_policy": "active_passive" 00:27:04.064 } 00:27:04.064 } 00:27:04.064 ]' 00:27:04.064 18:30:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:04.064 18:30:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:04.064 18:30:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:04.064 18:30:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:04.064 18:30:57 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:04.064 18:30:57 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:27:04.064 18:30:57 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:27:04.064 18:30:57 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:04.064 18:30:57 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:27:04.064 18:30:57 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:04.064 18:30:57 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:04.323 18:30:57 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=d79076db-6bc2-4ad5-bd7b-65dc1f2c7ea8 00:27:04.323 18:30:57 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:27:04.323 18:30:57 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u d79076db-6bc2-4ad5-bd7b-65dc1f2c7ea8 00:27:04.581 18:30:57 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:04.581 18:30:57 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=786c9a70-1f13-44ae-bd8e-ab77ddfcc3a0 00:27:04.581 18:30:57 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 786c9a70-1f13-44ae-bd8e-ab77ddfcc3a0 00:27:04.839 18:30:58 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=27fb2c7b-6c86-4be6-a517-d6e0f07f8576 00:27:04.839 18:30:58 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 27fb2c7b-6c86-4be6-a517-d6e0f07f8576 00:27:04.839 18:30:58 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:27:04.839 18:30:58 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:04.839 18:30:58 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=27fb2c7b-6c86-4be6-a517-d6e0f07f8576 00:27:04.839 18:30:58 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:27:04.839 18:30:58 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 27fb2c7b-6c86-4be6-a517-d6e0f07f8576 00:27:04.839 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=27fb2c7b-6c86-4be6-a517-d6e0f07f8576 00:27:04.839 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:04.839 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:04.839 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:04.839 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27fb2c7b-6c86-4be6-a517-d6e0f07f8576 00:27:05.098 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:05.098 { 00:27:05.098 "name": "27fb2c7b-6c86-4be6-a517-d6e0f07f8576", 00:27:05.098 "aliases": [ 00:27:05.098 "lvs/nvme0n1p0" 00:27:05.098 ], 00:27:05.098 "product_name": "Logical Volume", 00:27:05.098 "block_size": 4096, 00:27:05.098 "num_blocks": 26476544, 00:27:05.098 "uuid": "27fb2c7b-6c86-4be6-a517-d6e0f07f8576", 00:27:05.098 "assigned_rate_limits": { 00:27:05.098 "rw_ios_per_sec": 0, 00:27:05.098 "rw_mbytes_per_sec": 0, 00:27:05.098 "r_mbytes_per_sec": 0, 00:27:05.098 "w_mbytes_per_sec": 0 00:27:05.098 }, 00:27:05.098 "claimed": false, 00:27:05.098 "zoned": false, 00:27:05.098 "supported_io_types": { 00:27:05.098 "read": true, 00:27:05.098 "write": true, 00:27:05.098 "unmap": true, 00:27:05.098 "flush": false, 00:27:05.098 "reset": true, 00:27:05.098 "nvme_admin": false, 00:27:05.098 "nvme_io": false, 00:27:05.098 "nvme_io_md": false, 00:27:05.098 "write_zeroes": true, 00:27:05.098 "zcopy": false, 00:27:05.098 "get_zone_info": false, 00:27:05.098 "zone_management": false, 00:27:05.098 "zone_append": false, 00:27:05.098 "compare": false, 00:27:05.098 "compare_and_write": false, 00:27:05.098 "abort": false, 00:27:05.098 "seek_hole": true, 00:27:05.098 "seek_data": true, 00:27:05.098 "copy": false, 00:27:05.098 "nvme_iov_md": false 00:27:05.098 }, 00:27:05.098 "driver_specific": { 00:27:05.098 "lvol": { 00:27:05.098 "lvol_store_uuid": "786c9a70-1f13-44ae-bd8e-ab77ddfcc3a0", 00:27:05.098 "base_bdev": "nvme0n1", 00:27:05.098 "thin_provision": true, 00:27:05.098 "num_allocated_clusters": 0, 00:27:05.098 "snapshot": false, 00:27:05.098 "clone": false, 00:27:05.098 "esnap_clone": false 00:27:05.098 } 00:27:05.098 } 00:27:05.098 } 00:27:05.098 ]' 00:27:05.098 18:30:58 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:05.098 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:05.098 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:05.098 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:05.098 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:05.098 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:05.098 18:30:58 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:27:05.098 18:30:58 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:27:05.098 18:30:58 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:05.356 18:30:58 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:05.356 18:30:58 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:05.356 18:30:58 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 27fb2c7b-6c86-4be6-a517-d6e0f07f8576 00:27:05.356 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=27fb2c7b-6c86-4be6-a517-d6e0f07f8576 00:27:05.356 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:05.356 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:05.356 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:05.356 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27fb2c7b-6c86-4be6-a517-d6e0f07f8576 00:27:05.630 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:05.630 { 00:27:05.630 "name": "27fb2c7b-6c86-4be6-a517-d6e0f07f8576", 00:27:05.630 "aliases": [ 00:27:05.630 "lvs/nvme0n1p0" 00:27:05.630 ], 00:27:05.630 "product_name": "Logical Volume", 00:27:05.630 "block_size": 4096, 00:27:05.630 "num_blocks": 26476544, 00:27:05.630 "uuid": "27fb2c7b-6c86-4be6-a517-d6e0f07f8576", 00:27:05.630 "assigned_rate_limits": { 00:27:05.630 "rw_ios_per_sec": 0, 00:27:05.630 "rw_mbytes_per_sec": 0, 00:27:05.630 "r_mbytes_per_sec": 0, 00:27:05.630 "w_mbytes_per_sec": 0 00:27:05.630 }, 00:27:05.630 "claimed": false, 00:27:05.630 "zoned": false, 00:27:05.630 "supported_io_types": { 00:27:05.630 "read": true, 00:27:05.630 "write": true, 00:27:05.630 "unmap": true, 00:27:05.630 "flush": false, 00:27:05.630 "reset": true, 00:27:05.630 "nvme_admin": false, 00:27:05.630 "nvme_io": false, 00:27:05.630 "nvme_io_md": false, 00:27:05.630 "write_zeroes": true, 00:27:05.630 "zcopy": false, 00:27:05.630 "get_zone_info": false, 00:27:05.630 "zone_management": false, 00:27:05.630 "zone_append": false, 00:27:05.630 "compare": false, 00:27:05.630 "compare_and_write": false, 00:27:05.630 "abort": false, 00:27:05.630 "seek_hole": true, 00:27:05.630 "seek_data": true, 00:27:05.630 "copy": false, 00:27:05.630 "nvme_iov_md": false 00:27:05.630 }, 00:27:05.630 "driver_specific": { 00:27:05.630 "lvol": { 00:27:05.630 "lvol_store_uuid": "786c9a70-1f13-44ae-bd8e-ab77ddfcc3a0", 00:27:05.630 "base_bdev": "nvme0n1", 00:27:05.630 "thin_provision": true, 00:27:05.630 "num_allocated_clusters": 0, 00:27:05.630 "snapshot": false, 00:27:05.630 "clone": false, 00:27:05.630 "esnap_clone": false 00:27:05.630 } 00:27:05.630 } 00:27:05.630 } 00:27:05.630 ]' 00:27:05.630 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:05.630 18:30:58 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:27:05.630 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:05.630 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:05.630 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:05.630 18:30:58 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:05.630 18:30:58 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:27:05.630 18:30:58 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:05.907 18:30:59 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:27:05.907 18:30:59 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:27:05.907 18:30:59 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 27fb2c7b-6c86-4be6-a517-d6e0f07f8576 00:27:05.907 18:30:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=27fb2c7b-6c86-4be6-a517-d6e0f07f8576 00:27:05.907 18:30:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:05.907 18:30:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:27:05.907 18:30:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:27:05.907 18:30:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27fb2c7b-6c86-4be6-a517-d6e0f07f8576 00:27:06.164 18:30:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:06.164 { 00:27:06.164 "name": "27fb2c7b-6c86-4be6-a517-d6e0f07f8576", 00:27:06.164 "aliases": [ 00:27:06.164 "lvs/nvme0n1p0" 00:27:06.164 ], 00:27:06.164 "product_name": "Logical Volume", 00:27:06.164 "block_size": 4096, 00:27:06.164 "num_blocks": 26476544, 00:27:06.164 "uuid": "27fb2c7b-6c86-4be6-a517-d6e0f07f8576", 00:27:06.164 "assigned_rate_limits": { 00:27:06.164 "rw_ios_per_sec": 0, 00:27:06.164 "rw_mbytes_per_sec": 0, 00:27:06.164 "r_mbytes_per_sec": 0, 00:27:06.164 "w_mbytes_per_sec": 0 00:27:06.164 }, 00:27:06.164 "claimed": false, 00:27:06.164 "zoned": false, 00:27:06.164 "supported_io_types": { 00:27:06.164 "read": true, 00:27:06.164 "write": true, 00:27:06.164 "unmap": true, 00:27:06.164 "flush": false, 00:27:06.164 "reset": true, 00:27:06.164 "nvme_admin": false, 00:27:06.164 "nvme_io": false, 00:27:06.164 "nvme_io_md": false, 00:27:06.164 "write_zeroes": true, 00:27:06.164 "zcopy": false, 00:27:06.164 "get_zone_info": false, 00:27:06.164 "zone_management": false, 00:27:06.164 "zone_append": false, 00:27:06.164 "compare": false, 00:27:06.164 "compare_and_write": false, 00:27:06.164 "abort": false, 00:27:06.164 "seek_hole": true, 00:27:06.165 "seek_data": true, 00:27:06.165 "copy": false, 00:27:06.165 "nvme_iov_md": false 00:27:06.165 }, 00:27:06.165 "driver_specific": { 00:27:06.165 "lvol": { 00:27:06.165 "lvol_store_uuid": "786c9a70-1f13-44ae-bd8e-ab77ddfcc3a0", 00:27:06.165 "base_bdev": "nvme0n1", 00:27:06.165 "thin_provision": true, 00:27:06.165 "num_allocated_clusters": 0, 00:27:06.165 "snapshot": false, 00:27:06.165 "clone": false, 00:27:06.165 "esnap_clone": false 00:27:06.165 } 00:27:06.165 } 00:27:06.165 } 00:27:06.165 ]' 00:27:06.165 18:30:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:06.165 18:30:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:27:06.165 18:30:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:06.165 18:30:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:27:06.165 18:30:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:06.165 18:30:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:27:06.165 18:30:59 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:27:06.165 18:30:59 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 27fb2c7b-6c86-4be6-a517-d6e0f07f8576 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:27:06.423 [2024-11-26 18:30:59.620675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.423 [2024-11-26 18:30:59.620813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:06.423 [2024-11-26 18:30:59.620835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:06.423 [2024-11-26 18:30:59.620860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.423 [2024-11-26 18:30:59.624119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.423 [2024-11-26 18:30:59.624195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:06.423 [2024-11-26 18:30:59.624211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:27:06.423 [2024-11-26 18:30:59.624219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.423 [2024-11-26 18:30:59.624355] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:06.423 [2024-11-26 18:30:59.625318] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:06.423 [2024-11-26 18:30:59.625352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.423 [2024-11-26 18:30:59.625361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:06.423 [2024-11-26 18:30:59.625371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:27:06.423 [2024-11-26 18:30:59.625379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.423 [2024-11-26 18:30:59.625491] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 36169abc-b0a7-4ba5-a3f2-e4fa05e6ff6b 00:27:06.423 [2024-11-26 18:30:59.626919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.424 [2024-11-26 18:30:59.626953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:06.424 [2024-11-26 18:30:59.626964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:06.424 [2024-11-26 18:30:59.626973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.424 [2024-11-26 18:30:59.634627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.424 [2024-11-26 18:30:59.634673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:06.424 [2024-11-26 18:30:59.634685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.585 ms 00:27:06.424 [2024-11-26 18:30:59.634697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.424 [2024-11-26 18:30:59.634829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.424 [2024-11-26 18:30:59.634844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:06.424 [2024-11-26 18:30:59.634853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.073 ms 00:27:06.424 [2024-11-26 18:30:59.634865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.424 [2024-11-26 18:30:59.634904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.424 [2024-11-26 18:30:59.634915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:06.424 [2024-11-26 18:30:59.634922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:06.424 [2024-11-26 18:30:59.634933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.424 [2024-11-26 18:30:59.634971] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:06.424 [2024-11-26 18:30:59.640287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.424 [2024-11-26 18:30:59.640319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:06.424 [2024-11-26 18:30:59.640331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.332 ms 00:27:06.424 [2024-11-26 18:30:59.640339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.424 [2024-11-26 18:30:59.640422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.424 [2024-11-26 18:30:59.640455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:06.424 [2024-11-26 18:30:59.640468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:06.424 [2024-11-26 18:30:59.640476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.424 [2024-11-26 18:30:59.640516] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:06.424 [2024-11-26 18:30:59.640675] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:06.424 [2024-11-26 18:30:59.640696] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:06.424 [2024-11-26 18:30:59.640715] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:06.424 [2024-11-26 18:30:59.640741] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:06.424 [2024-11-26 18:30:59.640751] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:06.424 [2024-11-26 18:30:59.640763] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:06.424 [2024-11-26 18:30:59.640772] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:06.424 [2024-11-26 18:30:59.640788] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:06.424 [2024-11-26 18:30:59.640795] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:06.424 [2024-11-26 18:30:59.640808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.424 [2024-11-26 18:30:59.640816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:06.424 [2024-11-26 18:30:59.640829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:27:06.424 [2024-11-26 18:30:59.640837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.424 [2024-11-26 18:30:59.640929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.424 
[2024-11-26 18:30:59.640938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:06.424 [2024-11-26 18:30:59.640950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:06.424 [2024-11-26 18:30:59.640957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.424 [2024-11-26 18:30:59.641083] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:06.424 [2024-11-26 18:30:59.641096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:06.424 [2024-11-26 18:30:59.641106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:06.424 [2024-11-26 18:30:59.641113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.424 [2024-11-26 18:30:59.641124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:06.424 [2024-11-26 18:30:59.641131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:06.424 [2024-11-26 18:30:59.641140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:06.424 [2024-11-26 18:30:59.641147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:06.424 [2024-11-26 18:30:59.641156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:06.424 [2024-11-26 18:30:59.641163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:06.424 [2024-11-26 18:30:59.641172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:06.424 [2024-11-26 18:30:59.641179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:06.424 [2024-11-26 18:30:59.641187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:06.424 [2024-11-26 18:30:59.641194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:06.424 [2024-11-26 18:30:59.641203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:06.424 [2024-11-26 18:30:59.641210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.424 [2024-11-26 18:30:59.641220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:06.424 [2024-11-26 18:30:59.641226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:06.424 [2024-11-26 18:30:59.641234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.424 [2024-11-26 18:30:59.641242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:06.424 [2024-11-26 18:30:59.641251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:06.424 [2024-11-26 18:30:59.641258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.424 [2024-11-26 18:30:59.641267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:06.424 [2024-11-26 18:30:59.641274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:06.424 [2024-11-26 18:30:59.641282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.424 [2024-11-26 18:30:59.641288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:06.425 [2024-11-26 18:30:59.641297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:06.425 [2024-11-26 18:30:59.641303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.425 [2024-11-26 18:30:59.641311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:27:06.425 [2024-11-26 18:30:59.641318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:06.425 [2024-11-26 18:30:59.641326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:06.425 [2024-11-26 18:30:59.641333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:06.425 [2024-11-26 18:30:59.641343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:06.425 [2024-11-26 18:30:59.641349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:06.425 [2024-11-26 18:30:59.641357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:06.425 [2024-11-26 18:30:59.641364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:06.425 [2024-11-26 18:30:59.641372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:06.425 [2024-11-26 18:30:59.641379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:06.425 [2024-11-26 18:30:59.641388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:06.425 [2024-11-26 18:30:59.641394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.425 [2024-11-26 18:30:59.641402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:06.425 [2024-11-26 18:30:59.641409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:06.425 [2024-11-26 18:30:59.641417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.425 [2024-11-26 18:30:59.641423] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:06.425 [2024-11-26 18:30:59.641433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:06.425 [2024-11-26 18:30:59.641440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:06.425 [2024-11-26 18:30:59.641449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:06.425 [2024-11-26 18:30:59.641457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:06.425 [2024-11-26 18:30:59.641469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:06.425 [2024-11-26 18:30:59.641476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:06.425 [2024-11-26 18:30:59.641485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:06.425 [2024-11-26 18:30:59.641491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:06.425 [2024-11-26 18:30:59.641500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:06.425 [2024-11-26 18:30:59.641511] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:06.425 [2024-11-26 18:30:59.641522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:06.425 [2024-11-26 18:30:59.641534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:06.425 [2024-11-26 18:30:59.641543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:06.425 [2024-11-26 18:30:59.641550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:27:06.425 [2024-11-26 18:30:59.641560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:06.425 [2024-11-26 18:30:59.641567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:06.425 [2024-11-26 18:30:59.641576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:06.425 [2024-11-26 18:30:59.641583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:06.425 [2024-11-26 18:30:59.641592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:06.425 [2024-11-26 18:30:59.641599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:06.425 [2024-11-26 18:30:59.641610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:06.425 [2024-11-26 18:30:59.641626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:06.425 [2024-11-26 18:30:59.641636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:06.425 [2024-11-26 18:30:59.641643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:06.425 [2024-11-26 18:30:59.641654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:06.425 [2024-11-26 18:30:59.641661] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:06.425 [2024-11-26 18:30:59.641671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:06.425 [2024-11-26 18:30:59.641679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:06.425 [2024-11-26 18:30:59.641688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:06.425 [2024-11-26 18:30:59.641695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:06.425 [2024-11-26 18:30:59.641704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:06.425 [2024-11-26 18:30:59.641712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:06.425 [2024-11-26 18:30:59.641722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:06.425 [2024-11-26 18:30:59.641731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:27:06.425 [2024-11-26 18:30:59.641740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.425 [2024-11-26 18:30:59.641823] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:27:06.425 [2024-11-26 18:30:59.641841] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:10.642 [2024-11-26 18:31:03.799578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.642 [2024-11-26 18:31:03.799665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:10.642 [2024-11-26 18:31:03.799681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4165.767 ms 00:27:10.642 [2024-11-26 18:31:03.799691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.642 [2024-11-26 18:31:03.841695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.642 [2024-11-26 18:31:03.841766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:10.642 [2024-11-26 18:31:03.841782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.645 ms 00:27:10.642 [2024-11-26 18:31:03.841793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.642 [2024-11-26 18:31:03.841977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.642 [2024-11-26 18:31:03.841992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:10.642 [2024-11-26 18:31:03.842031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:27:10.642 [2024-11-26 18:31:03.842049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.642 [2024-11-26 18:31:03.906054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.642 [2024-11-26 18:31:03.906118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:10.642 [2024-11-26 18:31:03.906132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.088 ms 00:27:10.642 [2024-11-26 18:31:03.906144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.642 [2024-11-26 18:31:03.906274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.642 [2024-11-26 18:31:03.906287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:10.642 [2024-11-26 18:31:03.906296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:10.642 [2024-11-26 18:31:03.906306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.642 [2024-11-26 18:31:03.906811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.642 [2024-11-26 18:31:03.906828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:10.642 [2024-11-26 18:31:03.906838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:27:10.642 [2024-11-26 18:31:03.906849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.642 [2024-11-26 18:31:03.906971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.642 [2024-11-26 18:31:03.906985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:10.642 [2024-11-26 18:31:03.907019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:27:10.642 [2024-11-26 18:31:03.907034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.642 [2024-11-26 18:31:03.929397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.642 [2024-11-26 18:31:03.929463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:27:10.642 [2024-11-26 18:31:03.929479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.368 ms 00:27:10.642 [2024-11-26 18:31:03.929491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.642 [2024-11-26 18:31:03.945826] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:10.642 [2024-11-26 18:31:03.964627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.642 [2024-11-26 18:31:03.964693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:10.642 [2024-11-26 18:31:03.964732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.984 ms 00:27:10.642 [2024-11-26 18:31:03.964742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.902 [2024-11-26 18:31:04.084603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.902 [2024-11-26 18:31:04.084688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:10.902 [2024-11-26 18:31:04.084715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.955 ms 00:27:10.902 [2024-11-26 18:31:04.084726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.902 [2024-11-26 18:31:04.085011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.902 [2024-11-26 18:31:04.085031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:10.902 [2024-11-26 18:31:04.085046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:27:10.902 [2024-11-26 18:31:04.085055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.902 [2024-11-26 18:31:04.130348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.902 [2024-11-26 18:31:04.130490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:10.902 [2024-11-26 18:31:04.130515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.329 ms 00:27:10.902 [2024-11-26 18:31:04.130529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.902 [2024-11-26 18:31:04.175479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.902 [2024-11-26 18:31:04.175539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:10.902 [2024-11-26 18:31:04.175558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.874 ms 00:27:10.902 [2024-11-26 18:31:04.175568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.902 [2024-11-26 18:31:04.176505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.902 [2024-11-26 18:31:04.176537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:10.902 [2024-11-26 18:31:04.176551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:27:10.902 [2024-11-26 18:31:04.176560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.162 [2024-11-26 18:31:04.304596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.162 [2024-11-26 18:31:04.304675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:11.162 [2024-11-26 18:31:04.304719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 128.221 ms 00:27:11.162 [2024-11-26 18:31:04.304729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
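The bdev_ftl_create call above (and the 'FTL startup' management sequence it triggers) sits on top of the bdev stack assembled earlier in this test. As a rough by-hand equivalent of what trim.sh drove via rpc.py — a sketch using the UUIDs printed in this run, which the script captures from each command's output rather than hard-coding:

#!/usr/bin/env bash
# Sketch of the ftl0 bring-up traced in this log; UUIDs are from this run.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: QEMU NVMe at 0000:00:11.0 -> nvme0n1 (1310720 x 4096 B = 5120 MiB)
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

# clear_lvols first removed the stale store found on the disk in this run:
# $rpc bdev_lvol_delete_lvstore -u d79076db-6bc2-4ad5-bd7b-65dc1f2c7ea8

# Fresh lvstore on nvme0n1, then a 103424 MiB thin-provisioned (-t) lvol
$rpc bdev_lvol_create_lvstore nvme0n1 lvs   # printed 786c9a70-1f13-44ae-bd8e-ab77ddfcc3a0
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u 786c9a70-1f13-44ae-bd8e-ab77ddfcc3a0
                                            # printed 27fb2c7b-6c86-4be6-a517-d6e0f07f8576

# NV cache: second NVMe at 0000:00:10.0, split into one 5171 MiB partition
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$rpc bdev_split_create nvc0n1 -s 5171 1     # yields nvc0n1p0

# FTL bdev over base lvol + NV cache write buffer; 60 MiB L2P DRAM limit
# (the log notes "l2p maximum resident size is: 59 (of 60) MiB")
$rpc -t 240 bdev_ftl_create -b ftl0 \
    -d 27fb2c7b-6c86-4be6-a517-d6e0f07f8576 -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The -t 240 matches the timeout=240 set by trim.sh, and --core_mask 7 matches the three reactors started on cores 0-2; in this run the resulting startup took about 4.8 s, dominated by the ~4.2 s NV cache scrub recorded above.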
00:27:11.162 [2024-11-26 18:31:04.349175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.162 [2024-11-26 18:31:04.349246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:11.162 [2024-11-26 18:31:04.349266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.349 ms 00:27:11.162 [2024-11-26 18:31:04.349275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.162 [2024-11-26 18:31:04.393964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.162 [2024-11-26 18:31:04.394141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:11.162 [2024-11-26 18:31:04.394164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.620 ms 00:27:11.162 [2024-11-26 18:31:04.394173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.162 [2024-11-26 18:31:04.434652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.162 [2024-11-26 18:31:04.434727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:11.162 [2024-11-26 18:31:04.434763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.406 ms 00:27:11.162 [2024-11-26 18:31:04.434772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.162 [2024-11-26 18:31:04.434903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.162 [2024-11-26 18:31:04.434917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:11.162 [2024-11-26 18:31:04.434932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:11.162 [2024-11-26 18:31:04.434939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.162 [2024-11-26 18:31:04.435021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:11.162 [2024-11-26 18:31:04.435031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:11.162 [2024-11-26 18:31:04.435042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:11.162 [2024-11-26 18:31:04.435049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.162 [2024-11-26 18:31:04.436051] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:11.162 [2024-11-26 18:31:04.440940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4824.369 ms, result 0 00:27:11.162 [2024-11-26 18:31:04.441989] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:11.162 { 00:27:11.162 "name": "ftl0", 00:27:11.162 "uuid": "36169abc-b0a7-4ba5-a3f2-e4fa05e6ff6b" 00:27:11.162 } 00:27:11.162 18:31:04 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:27:11.162 18:31:04 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:27:11.162 18:31:04 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:11.162 18:31:04 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:27:11.162 18:31:04 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:11.162 18:31:04 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:11.162 18:31:04 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:11.422 18:31:04 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:27:11.682 [ 00:27:11.682 { 00:27:11.682 "name": "ftl0", 00:27:11.682 "aliases": [ 00:27:11.682 "36169abc-b0a7-4ba5-a3f2-e4fa05e6ff6b" 00:27:11.682 ], 00:27:11.682 "product_name": "FTL disk", 00:27:11.682 "block_size": 4096, 00:27:11.682 "num_blocks": 23592960, 00:27:11.682 "uuid": "36169abc-b0a7-4ba5-a3f2-e4fa05e6ff6b", 00:27:11.682 "assigned_rate_limits": { 00:27:11.682 "rw_ios_per_sec": 0, 00:27:11.682 "rw_mbytes_per_sec": 0, 00:27:11.682 "r_mbytes_per_sec": 0, 00:27:11.682 "w_mbytes_per_sec": 0 00:27:11.682 }, 00:27:11.682 "claimed": false, 00:27:11.682 "zoned": false, 00:27:11.682 "supported_io_types": { 00:27:11.682 "read": true, 00:27:11.682 "write": true, 00:27:11.682 "unmap": true, 00:27:11.682 "flush": true, 00:27:11.682 "reset": false, 00:27:11.682 "nvme_admin": false, 00:27:11.682 "nvme_io": false, 00:27:11.682 "nvme_io_md": false, 00:27:11.682 "write_zeroes": true, 00:27:11.682 "zcopy": false, 00:27:11.682 "get_zone_info": false, 00:27:11.682 "zone_management": false, 00:27:11.682 "zone_append": false, 00:27:11.682 "compare": false, 00:27:11.682 "compare_and_write": false, 00:27:11.682 "abort": false, 00:27:11.682 "seek_hole": false, 00:27:11.682 "seek_data": false, 00:27:11.682 "copy": false, 00:27:11.682 "nvme_iov_md": false 00:27:11.682 }, 00:27:11.682 "driver_specific": { 00:27:11.682 "ftl": { 00:27:11.682 "base_bdev": "27fb2c7b-6c86-4be6-a517-d6e0f07f8576", 00:27:11.682 "cache": "nvc0n1p0" 00:27:11.682 } 00:27:11.682 } 00:27:11.682 } 00:27:11.682 ] 00:27:11.682 18:31:04 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:27:11.682 18:31:04 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:27:11.682 18:31:04 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:11.943 18:31:05 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:27:11.943 18:31:05 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:27:12.203 18:31:05 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:27:12.204 { 00:27:12.204 "name": "ftl0", 00:27:12.204 "aliases": [ 00:27:12.204 "36169abc-b0a7-4ba5-a3f2-e4fa05e6ff6b" 00:27:12.204 ], 00:27:12.204 "product_name": "FTL disk", 00:27:12.204 "block_size": 4096, 00:27:12.204 "num_blocks": 23592960, 00:27:12.204 "uuid": "36169abc-b0a7-4ba5-a3f2-e4fa05e6ff6b", 00:27:12.204 "assigned_rate_limits": { 00:27:12.204 "rw_ios_per_sec": 0, 00:27:12.204 "rw_mbytes_per_sec": 0, 00:27:12.204 "r_mbytes_per_sec": 0, 00:27:12.204 "w_mbytes_per_sec": 0 00:27:12.204 }, 00:27:12.204 "claimed": false, 00:27:12.204 "zoned": false, 00:27:12.204 "supported_io_types": { 00:27:12.204 "read": true, 00:27:12.204 "write": true, 00:27:12.204 "unmap": true, 00:27:12.204 "flush": true, 00:27:12.204 "reset": false, 00:27:12.204 "nvme_admin": false, 00:27:12.204 "nvme_io": false, 00:27:12.204 "nvme_io_md": false, 00:27:12.204 "write_zeroes": true, 00:27:12.204 "zcopy": false, 00:27:12.204 "get_zone_info": false, 00:27:12.204 "zone_management": false, 00:27:12.204 "zone_append": false, 00:27:12.204 "compare": false, 00:27:12.204 "compare_and_write": false, 00:27:12.204 "abort": false, 00:27:12.204 "seek_hole": false, 00:27:12.204 "seek_data": false, 00:27:12.204 "copy": false, 00:27:12.204 "nvme_iov_md": false 00:27:12.204 }, 00:27:12.204 "driver_specific": { 00:27:12.204 "ftl": { 00:27:12.204 "base_bdev": "27fb2c7b-6c86-4be6-a517-d6e0f07f8576", 
00:27:12.204 "cache": "nvc0n1p0" 00:27:12.204 } 00:27:12.204 } 00:27:12.204 } 00:27:12.204 ]' 00:27:12.204 18:31:05 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:27:12.204 18:31:05 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:27:12.204 18:31:05 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:12.464 [2024-11-26 18:31:05.690424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.464 [2024-11-26 18:31:05.690496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:12.464 [2024-11-26 18:31:05.690515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:12.464 [2024-11-26 18:31:05.690527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.464 [2024-11-26 18:31:05.690575] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:12.464 [2024-11-26 18:31:05.695587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.464 [2024-11-26 18:31:05.695642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:12.464 [2024-11-26 18:31:05.695661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.996 ms 00:27:12.464 [2024-11-26 18:31:05.695670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.464 [2024-11-26 18:31:05.696338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.464 [2024-11-26 18:31:05.696371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:12.464 [2024-11-26 18:31:05.696384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:27:12.464 [2024-11-26 18:31:05.696393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.465 [2024-11-26 18:31:05.699972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.465 [2024-11-26 18:31:05.700007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:12.465 [2024-11-26 18:31:05.700019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.526 ms 00:27:12.465 [2024-11-26 18:31:05.700028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.465 [2024-11-26 18:31:05.706399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.465 [2024-11-26 18:31:05.706468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:12.465 [2024-11-26 18:31:05.706484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.311 ms 00:27:12.465 [2024-11-26 18:31:05.706492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.465 [2024-11-26 18:31:05.749585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.465 [2024-11-26 18:31:05.749671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:12.465 [2024-11-26 18:31:05.749694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.005 ms 00:27:12.465 [2024-11-26 18:31:05.749702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.465 [2024-11-26 18:31:05.776390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.465 [2024-11-26 18:31:05.776460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:12.465 [2024-11-26 18:31:05.776481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 26.573 ms 00:27:12.465 [2024-11-26 18:31:05.776490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.465 [2024-11-26 18:31:05.776828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.465 [2024-11-26 18:31:05.776846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:12.465 [2024-11-26 18:31:05.776858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:27:12.465 [2024-11-26 18:31:05.776867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.727 [2024-11-26 18:31:05.819612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.727 [2024-11-26 18:31:05.819684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:12.727 [2024-11-26 18:31:05.819703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.784 ms 00:27:12.727 [2024-11-26 18:31:05.819711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.727 [2024-11-26 18:31:05.862549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.727 [2024-11-26 18:31:05.862745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:12.727 [2024-11-26 18:31:05.862771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.765 ms 00:27:12.727 [2024-11-26 18:31:05.862781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.727 [2024-11-26 18:31:05.903966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.727 [2024-11-26 18:31:05.904034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:12.727 [2024-11-26 18:31:05.904050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.085 ms 00:27:12.727 [2024-11-26 18:31:05.904058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.727 [2024-11-26 18:31:05.944970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.727 [2024-11-26 18:31:05.945036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:12.727 [2024-11-26 18:31:05.945053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.782 ms 00:27:12.727 [2024-11-26 18:31:05.945062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.727 [2024-11-26 18:31:05.945197] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:12.727 [2024-11-26 18:31:05.945215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945292] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 
[2024-11-26 18:31:05.945565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:27:12.727 [2024-11-26 18:31:05.945881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.945992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.946000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.946013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.946021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:12.727 [2024-11-26 18:31:05.946033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:12.728 [2024-11-26 18:31:05.946321] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:12.728 [2024-11-26 18:31:05.946334] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36169abc-b0a7-4ba5-a3f2-e4fa05e6ff6b 00:27:12.728 [2024-11-26 18:31:05.946343] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:12.728 [2024-11-26 18:31:05.946353] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:12.728 [2024-11-26 18:31:05.946365] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:12.728 [2024-11-26 18:31:05.946376] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:12.728 [2024-11-26 18:31:05.946385] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:12.728 [2024-11-26 18:31:05.946396] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:27:12.728 [2024-11-26 18:31:05.946404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:12.728 [2024-11-26 18:31:05.946414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:12.728 [2024-11-26 18:31:05.946421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:12.728 [2024-11-26 18:31:05.946432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.728 [2024-11-26 18:31:05.946441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:12.728 [2024-11-26 18:31:05.946453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:27:12.728 [2024-11-26 18:31:05.946461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.728 [2024-11-26 18:31:05.969888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.728 [2024-11-26 18:31:05.969955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:12.728 [2024-11-26 18:31:05.969973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.407 ms 00:27:12.728 [2024-11-26 18:31:05.969980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.728 [2024-11-26 18:31:05.970676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.728 [2024-11-26 18:31:05.970689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:12.728 [2024-11-26 18:31:05.970701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:27:12.728 [2024-11-26 18:31:05.970710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.728 [2024-11-26 18:31:06.047257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.728 [2024-11-26 18:31:06.047319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:12.728 [2024-11-26 18:31:06.047334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.728 [2024-11-26 18:31:06.047359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.728 [2024-11-26 18:31:06.047532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.728 [2024-11-26 18:31:06.047547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:12.728 [2024-11-26 18:31:06.047558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.728 [2024-11-26 18:31:06.047567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.728 [2024-11-26 18:31:06.047664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.728 [2024-11-26 18:31:06.047680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:12.728 [2024-11-26 18:31:06.047715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.728 [2024-11-26 18:31:06.047723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.728 [2024-11-26 18:31:06.047761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.728 [2024-11-26 18:31:06.047770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:12.728 [2024-11-26 18:31:06.047781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.728 [2024-11-26 18:31:06.047790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-26 18:31:06.198874] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.989 [2024-11-26 18:31:06.198947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:12.989 [2024-11-26 18:31:06.198973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.989 [2024-11-26 18:31:06.198988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-26 18:31:06.315850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.989 [2024-11-26 18:31:06.315927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:12.989 [2024-11-26 18:31:06.315942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.989 [2024-11-26 18:31:06.315951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-26 18:31:06.316084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.989 [2024-11-26 18:31:06.316095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:12.989 [2024-11-26 18:31:06.316112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.989 [2024-11-26 18:31:06.316120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-26 18:31:06.316172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.989 [2024-11-26 18:31:06.316181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:12.989 [2024-11-26 18:31:06.316191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.989 [2024-11-26 18:31:06.316198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-26 18:31:06.316347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.989 [2024-11-26 18:31:06.316358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:12.989 [2024-11-26 18:31:06.316369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.989 [2024-11-26 18:31:06.316378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-26 18:31:06.316438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.989 [2024-11-26 18:31:06.316449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:12.989 [2024-11-26 18:31:06.316459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.989 [2024-11-26 18:31:06.316466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-26 18:31:06.316522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.989 [2024-11-26 18:31:06.316531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:12.989 [2024-11-26 18:31:06.316543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.989 [2024-11-26 18:31:06.316553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-26 18:31:06.316610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.989 [2024-11-26 18:31:06.316647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:12.989 [2024-11-26 18:31:06.316657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.989 [2024-11-26 18:31:06.316665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:12.989 [2024-11-26 18:31:06.316875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 627.645 ms, result 0 00:27:13.250 true 00:27:13.250 18:31:06 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78990 00:27:13.250 18:31:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78990 ']' 00:27:13.250 18:31:06 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78990 00:27:13.250 18:31:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:27:13.250 18:31:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.250 18:31:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78990 00:27:13.250 killing process with pid 78990 00:27:13.250 18:31:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:13.250 18:31:06 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:13.250 18:31:06 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78990' 00:27:13.250 18:31:06 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78990 00:27:13.250 18:31:06 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78990 00:27:21.377 18:31:13 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:27:21.377 65536+0 records in 00:27:21.377 65536+0 records out 00:27:21.377 268435456 bytes (268 MB, 256 MiB) copied, 1.02501 s, 262 MB/s 00:27:21.377 18:31:14 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:21.377 [2024-11-26 18:31:14.380555] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
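Consistency check on the dd run above: 65536 records × 4096 bytes = 268435456 bytes (exactly 256 MiB, i.e. 268 MB decimal), and 268435456 bytes / 1.02501 s ≈ 261.9 × 10^6 bytes/s, which dd rounds to the reported 262 MB/s (dd reports decimal megabytes per second).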
00:27:21.377 [2024-11-26 18:31:14.380712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79244 ] 00:27:21.377 [2024-11-26 18:31:14.545414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.377 [2024-11-26 18:31:14.676882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.946 [2024-11-26 18:31:15.072758] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:21.946 [2024-11-26 18:31:15.072842] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:21.946 [2024-11-26 18:31:15.234025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.946 [2024-11-26 18:31:15.234096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:21.946 [2024-11-26 18:31:15.234110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:21.946 [2024-11-26 18:31:15.234119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.946 [2024-11-26 18:31:15.237578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.946 [2024-11-26 18:31:15.237653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:21.946 [2024-11-26 18:31:15.237668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.425 ms 00:27:21.946 [2024-11-26 18:31:15.237676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.946 [2024-11-26 18:31:15.237853] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:21.946 [2024-11-26 18:31:15.239027] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:21.946 [2024-11-26 18:31:15.239073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.946 [2024-11-26 18:31:15.239084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:21.946 [2024-11-26 18:31:15.239095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:27:21.946 [2024-11-26 18:31:15.239104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.946 [2024-11-26 18:31:15.240788] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:21.946 [2024-11-26 18:31:15.263502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.946 [2024-11-26 18:31:15.263579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:21.946 [2024-11-26 18:31:15.263594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.756 ms 00:27:21.946 [2024-11-26 18:31:15.263602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.946 [2024-11-26 18:31:15.263867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.946 [2024-11-26 18:31:15.263883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:21.946 [2024-11-26 18:31:15.263894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:21.946 [2024-11-26 18:31:15.263903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.946 [2024-11-26 18:31:15.271625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:21.946 [2024-11-26 18:31:15.271677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:21.946 [2024-11-26 18:31:15.271689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.674 ms 00:27:21.946 [2024-11-26 18:31:15.271696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.946 [2024-11-26 18:31:15.271847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.946 [2024-11-26 18:31:15.271868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:21.946 [2024-11-26 18:31:15.271878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:21.946 [2024-11-26 18:31:15.271886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.946 [2024-11-26 18:31:15.271928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.946 [2024-11-26 18:31:15.271939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:21.946 [2024-11-26 18:31:15.271947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:21.946 [2024-11-26 18:31:15.271955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.946 [2024-11-26 18:31:15.271980] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:21.946 [2024-11-26 18:31:15.277502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.946 [2024-11-26 18:31:15.277559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:21.946 [2024-11-26 18:31:15.277572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.539 ms 00:27:21.946 [2024-11-26 18:31:15.277581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.946 [2024-11-26 18:31:15.277725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.946 [2024-11-26 18:31:15.277740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:21.946 [2024-11-26 18:31:15.277749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:21.946 [2024-11-26 18:31:15.277758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.946 [2024-11-26 18:31:15.277788] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:21.946 [2024-11-26 18:31:15.277811] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:21.946 [2024-11-26 18:31:15.277850] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:21.946 [2024-11-26 18:31:15.277866] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:22.205 [2024-11-26 18:31:15.277966] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:22.205 [2024-11-26 18:31:15.277984] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:22.205 [2024-11-26 18:31:15.277997] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:22.205 [2024-11-26 18:31:15.278012] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:22.205 [2024-11-26 18:31:15.278022] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:22.205 [2024-11-26 18:31:15.278031] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:22.206 [2024-11-26 18:31:15.278040] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:22.206 [2024-11-26 18:31:15.278064] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:22.206 [2024-11-26 18:31:15.278073] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:22.206 [2024-11-26 18:31:15.278082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.206 [2024-11-26 18:31:15.278091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:22.206 [2024-11-26 18:31:15.278100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:27:22.206 [2024-11-26 18:31:15.278109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.206 [2024-11-26 18:31:15.278199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.206 [2024-11-26 18:31:15.278213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:22.206 [2024-11-26 18:31:15.278222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:22.206 [2024-11-26 18:31:15.278231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.206 [2024-11-26 18:31:15.278341] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:22.206 [2024-11-26 18:31:15.278354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:22.206 [2024-11-26 18:31:15.278363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:22.206 [2024-11-26 18:31:15.278373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.206 [2024-11-26 18:31:15.278383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:22.206 [2024-11-26 18:31:15.278392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:22.206 [2024-11-26 18:31:15.278400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:22.206 [2024-11-26 18:31:15.278409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:22.206 [2024-11-26 18:31:15.278417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:22.206 [2024-11-26 18:31:15.278426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:22.206 [2024-11-26 18:31:15.278435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:22.206 [2024-11-26 18:31:15.278465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:22.206 [2024-11-26 18:31:15.278473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:22.206 [2024-11-26 18:31:15.278482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:22.206 [2024-11-26 18:31:15.278491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:22.206 [2024-11-26 18:31:15.278498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.206 [2024-11-26 18:31:15.278506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:22.206 [2024-11-26 18:31:15.278515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:22.206 [2024-11-26 18:31:15.278523] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.206 [2024-11-26 18:31:15.278532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:22.206 [2024-11-26 18:31:15.278540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:22.206 [2024-11-26 18:31:15.278549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.206 [2024-11-26 18:31:15.278557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:22.206 [2024-11-26 18:31:15.278565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:22.206 [2024-11-26 18:31:15.278573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.206 [2024-11-26 18:31:15.278581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:22.206 [2024-11-26 18:31:15.278589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:22.206 [2024-11-26 18:31:15.278597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.206 [2024-11-26 18:31:15.278604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:22.206 [2024-11-26 18:31:15.278612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:22.206 [2024-11-26 18:31:15.278620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.206 [2024-11-26 18:31:15.278627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:22.206 [2024-11-26 18:31:15.278649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:22.206 [2024-11-26 18:31:15.278658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:22.206 [2024-11-26 18:31:15.278665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:22.206 [2024-11-26 18:31:15.278674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:22.206 [2024-11-26 18:31:15.278681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:22.206 [2024-11-26 18:31:15.278689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:22.206 [2024-11-26 18:31:15.278697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:22.206 [2024-11-26 18:31:15.278706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.206 [2024-11-26 18:31:15.278714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:22.206 [2024-11-26 18:31:15.278722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:22.206 [2024-11-26 18:31:15.278731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.206 [2024-11-26 18:31:15.278739] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:22.206 [2024-11-26 18:31:15.278748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:22.206 [2024-11-26 18:31:15.278761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:22.206 [2024-11-26 18:31:15.278769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.206 [2024-11-26 18:31:15.278779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:22.206 [2024-11-26 18:31:15.278788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:22.206 [2024-11-26 18:31:15.278796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:22.206 
[2024-11-26 18:31:15.278804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:22.206 [2024-11-26 18:31:15.278813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:22.206 [2024-11-26 18:31:15.278821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:22.206 [2024-11-26 18:31:15.278831] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:22.206 [2024-11-26 18:31:15.278843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:22.206 [2024-11-26 18:31:15.278853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:22.206 [2024-11-26 18:31:15.278862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:22.206 [2024-11-26 18:31:15.278871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:22.206 [2024-11-26 18:31:15.278880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:22.206 [2024-11-26 18:31:15.278889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:22.206 [2024-11-26 18:31:15.278898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:22.206 [2024-11-26 18:31:15.278907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:22.206 [2024-11-26 18:31:15.278916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:22.206 [2024-11-26 18:31:15.278925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:22.206 [2024-11-26 18:31:15.278933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:22.206 [2024-11-26 18:31:15.278941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:22.206 [2024-11-26 18:31:15.278949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:22.206 [2024-11-26 18:31:15.278957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:22.206 [2024-11-26 18:31:15.278966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:22.206 [2024-11-26 18:31:15.278974] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:22.206 [2024-11-26 18:31:15.278984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:22.206 [2024-11-26 18:31:15.278993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:22.206 [2024-11-26 18:31:15.279001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:22.206 [2024-11-26 18:31:15.279010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:22.206 [2024-11-26 18:31:15.279019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:22.206 [2024-11-26 18:31:15.279029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.206 [2024-11-26 18:31:15.279041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:22.206 [2024-11-26 18:31:15.279050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:27:22.206 [2024-11-26 18:31:15.279059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.207 [2024-11-26 18:31:15.321785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.207 [2024-11-26 18:31:15.321967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:22.207 [2024-11-26 18:31:15.322005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.730 ms 00:27:22.207 [2024-11-26 18:31:15.322030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.207 [2024-11-26 18:31:15.322229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.207 [2024-11-26 18:31:15.322303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:22.207 [2024-11-26 18:31:15.322351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:27:22.207 [2024-11-26 18:31:15.322375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.207 [2024-11-26 18:31:15.380321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.207 [2024-11-26 18:31:15.380490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:22.207 [2024-11-26 18:31:15.380538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.000 ms 00:27:22.207 [2024-11-26 18:31:15.380562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.207 [2024-11-26 18:31:15.380762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.207 [2024-11-26 18:31:15.380817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:22.207 [2024-11-26 18:31:15.380856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:22.207 [2024-11-26 18:31:15.380889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.207 [2024-11-26 18:31:15.381391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.207 [2024-11-26 18:31:15.381441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:22.207 [2024-11-26 18:31:15.381489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:27:22.207 [2024-11-26 18:31:15.381521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.207 [2024-11-26 18:31:15.381706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.207 [2024-11-26 18:31:15.381756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:22.207 [2024-11-26 18:31:15.381792] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:27:22.207 [2024-11-26 18:31:15.381822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.207 [2024-11-26 18:31:15.402880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.207 [2024-11-26 18:31:15.403035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:22.207 [2024-11-26 18:31:15.403071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.034 ms 00:27:22.207 [2024-11-26 18:31:15.403097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.207 [2024-11-26 18:31:15.424763] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:22.207 [2024-11-26 18:31:15.424948] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:22.207 [2024-11-26 18:31:15.424997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.207 [2024-11-26 18:31:15.425021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:22.207 [2024-11-26 18:31:15.425047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.766 ms 00:27:22.207 [2024-11-26 18:31:15.425083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.207 [2024-11-26 18:31:15.459901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.207 [2024-11-26 18:31:15.460134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:22.207 [2024-11-26 18:31:15.460173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.688 ms 00:27:22.207 [2024-11-26 18:31:15.460198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.207 [2024-11-26 18:31:15.482392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.207 [2024-11-26 18:31:15.482568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:22.207 [2024-11-26 18:31:15.482603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.039 ms 00:27:22.207 [2024-11-26 18:31:15.482647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.207 [2024-11-26 18:31:15.504082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.207 [2024-11-26 18:31:15.504266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:22.207 [2024-11-26 18:31:15.504302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.249 ms 00:27:22.207 [2024-11-26 18:31:15.504325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.207 [2024-11-26 18:31:15.505386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.207 [2024-11-26 18:31:15.505467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:22.207 [2024-11-26 18:31:15.505504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:27:22.207 [2024-11-26 18:31:15.505544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.466 [2024-11-26 18:31:15.602542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.466 [2024-11-26 18:31:15.602757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:22.466 [2024-11-26 18:31:15.602781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 97.126 ms 00:27:22.466 [2024-11-26 18:31:15.602791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.466 [2024-11-26 18:31:15.618572] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:22.466 [2024-11-26 18:31:15.637030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.466 [2024-11-26 18:31:15.637103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:22.466 [2024-11-26 18:31:15.637120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.090 ms 00:27:22.466 [2024-11-26 18:31:15.637129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.466 [2024-11-26 18:31:15.637268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.466 [2024-11-26 18:31:15.637282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:22.466 [2024-11-26 18:31:15.637292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:22.466 [2024-11-26 18:31:15.637300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.466 [2024-11-26 18:31:15.637359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.466 [2024-11-26 18:31:15.637370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:22.466 [2024-11-26 18:31:15.637379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:22.466 [2024-11-26 18:31:15.637387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.466 [2024-11-26 18:31:15.637429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.466 [2024-11-26 18:31:15.637448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:22.466 [2024-11-26 18:31:15.637457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:22.466 [2024-11-26 18:31:15.637466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.466 [2024-11-26 18:31:15.637504] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:22.466 [2024-11-26 18:31:15.637515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.466 [2024-11-26 18:31:15.637524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:22.466 [2024-11-26 18:31:15.637533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:22.466 [2024-11-26 18:31:15.637542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.466 [2024-11-26 18:31:15.682611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.466 [2024-11-26 18:31:15.682794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:22.466 [2024-11-26 18:31:15.682816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.129 ms 00:27:22.466 [2024-11-26 18:31:15.682843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.466 [2024-11-26 18:31:15.683072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.466 [2024-11-26 18:31:15.683088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:22.466 [2024-11-26 18:31:15.683098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:22.466 [2024-11-26 18:31:15.683106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
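Each management step above is logged by trace_step as an Action / name / duration / status quadruple. A minimal sketch for aggregating those per-step timings offline, assuming the console log is saved one entry per line as build.log (a placeholder name); it sums only the "duration:" fields emitted by trace_step, not the overall "duration =" totals printed by finish_msg:

  # sum all per-step FTL trace_step durations (in ms) found in the console log
  awk '/trace_step/ && /duration:/ {
         for (i = 1; i <= NF; i++)
           if ($i == "duration:") sum += $(i + 1)
       }
       END { printf "total trace_step time: %.3f ms\n", sum }' build.log

As expected, the 'FTL startup' total reported just below (duration = 450.725 ms) comes out slightly larger than the sum of its individual per-step durations, since the total also covers time spent between steps.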
00:27:22.466 [2024-11-26 18:31:15.684235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:22.466 [2024-11-26 18:31:15.689928] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 450.725 ms, result 0 00:27:22.466 [2024-11-26 18:31:15.690926] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:22.466 [2024-11-26 18:31:15.712989] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:23.401  [2024-11-26T18:31:18.114Z] Copying: 27/256 [MB] (27 MBps) [2024-11-26T18:31:19.049Z] Copying: 52/256 [MB] (25 MBps) [2024-11-26T18:31:19.984Z] Copying: 77/256 [MB] (25 MBps) [2024-11-26T18:31:20.921Z] Copying: 104/256 [MB] (26 MBps) [2024-11-26T18:31:21.859Z] Copying: 133/256 [MB] (28 MBps) [2024-11-26T18:31:22.796Z] Copying: 162/256 [MB] (29 MBps) [2024-11-26T18:31:23.730Z] Copying: 191/256 [MB] (29 MBps) [2024-11-26T18:31:25.106Z] Copying: 217/256 [MB] (25 MBps) [2024-11-26T18:31:25.365Z] Copying: 243/256 [MB] (26 MBps) [2024-11-26T18:31:25.365Z] Copying: 256/256 [MB] (average 27 MBps)[2024-11-26 18:31:25.188249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:32.030 [2024-11-26 18:31:25.203164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.030 [2024-11-26 18:31:25.203269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:32.031 [2024-11-26 18:31:25.203316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:32.031 [2024-11-26 18:31:25.203341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.031 [2024-11-26 18:31:25.203381] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:32.031 [2024-11-26 18:31:25.207894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.031 [2024-11-26 18:31:25.207975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:32.031 [2024-11-26 18:31:25.208004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.485 ms 00:27:32.031 [2024-11-26 18:31:25.208023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.031 [2024-11-26 18:31:25.210114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.031 [2024-11-26 18:31:25.210202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:32.031 [2024-11-26 18:31:25.210233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.058 ms 00:27:32.031 [2024-11-26 18:31:25.210257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.031 [2024-11-26 18:31:25.217407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.031 [2024-11-26 18:31:25.217483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:32.031 [2024-11-26 18:31:25.217511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.130 ms 00:27:32.031 [2024-11-26 18:31:25.217532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.031 [2024-11-26 18:31:25.223152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.031 [2024-11-26 18:31:25.223236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:32.031 
[2024-11-26 18:31:25.223263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.583 ms 00:27:32.031 [2024-11-26 18:31:25.223282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.031 [2024-11-26 18:31:25.258906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.031 [2024-11-26 18:31:25.259015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:32.031 [2024-11-26 18:31:25.259061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.622 ms 00:27:32.031 [2024-11-26 18:31:25.259082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.031 [2024-11-26 18:31:25.284434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.031 [2024-11-26 18:31:25.284548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:32.031 [2024-11-26 18:31:25.284581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.326 ms 00:27:32.031 [2024-11-26 18:31:25.284601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.031 [2024-11-26 18:31:25.284781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.031 [2024-11-26 18:31:25.284821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:32.031 [2024-11-26 18:31:25.284846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:27:32.031 [2024-11-26 18:31:25.284892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.031 [2024-11-26 18:31:25.321029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.031 [2024-11-26 18:31:25.321138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:32.031 [2024-11-26 18:31:25.321168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.155 ms 00:27:32.031 [2024-11-26 18:31:25.321187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.031 [2024-11-26 18:31:25.357973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.031 [2024-11-26 18:31:25.358101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:32.031 [2024-11-26 18:31:25.358148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.782 ms 00:27:32.031 [2024-11-26 18:31:25.358168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.291 [2024-11-26 18:31:25.394386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.291 [2024-11-26 18:31:25.394490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:32.291 [2024-11-26 18:31:25.394539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.206 ms 00:27:32.291 [2024-11-26 18:31:25.394562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.291 [2024-11-26 18:31:25.430754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.291 [2024-11-26 18:31:25.430869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:32.291 [2024-11-26 18:31:25.430898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.142 ms 00:27:32.291 [2024-11-26 18:31:25.430917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.291 [2024-11-26 18:31:25.430982] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:32.291 [2024-11-26 18:31:25.431037] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.431974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 
18:31:25.432116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.432972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.433012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:32.291 [2024-11-26 18:31:25.433053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:27:32.292 [2024-11-26 18:31:25.433140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:32.292 [2024-11-26 18:31:25.433715] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:32.292 [2024-11-26 18:31:25.433723] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36169abc-b0a7-4ba5-a3f2-e4fa05e6ff6b 00:27:32.292 [2024-11-26 18:31:25.433731] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:32.292 [2024-11-26 18:31:25.433739] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:32.292 [2024-11-26 18:31:25.433746] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:32.292 [2024-11-26 18:31:25.433754] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:32.292 [2024-11-26 18:31:25.433762] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:32.292 [2024-11-26 18:31:25.433770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:32.292 [2024-11-26 18:31:25.433777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:32.292 [2024-11-26 18:31:25.433784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:32.292 [2024-11-26 18:31:25.433790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:32.292 [2024-11-26 18:31:25.433798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.292 [2024-11-26 18:31:25.433809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:32.292 [2024-11-26 18:31:25.433818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.823 ms 00:27:32.292 [2024-11-26 18:31:25.433825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.292 [2024-11-26 18:31:25.453862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.292 [2024-11-26 18:31:25.453899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:32.292 [2024-11-26 18:31:25.453910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.051 ms 00:27:32.292 [2024-11-26 18:31:25.453917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.292 [2024-11-26 18:31:25.454419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.292 [2024-11-26 18:31:25.454428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:32.292 [2024-11-26 18:31:25.454436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:27:32.292 [2024-11-26 18:31:25.454443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.292 [2024-11-26 18:31:25.508281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.292 [2024-11-26 18:31:25.508416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:32.292 [2024-11-26 18:31:25.508433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.292 [2024-11-26 18:31:25.508441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.292 [2024-11-26 18:31:25.508552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.292 [2024-11-26 18:31:25.508561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:32.292 [2024-11-26 18:31:25.508571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.292 [2024-11-26 18:31:25.508578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:32.292 [2024-11-26 18:31:25.508653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.292 [2024-11-26 18:31:25.508666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:32.292 [2024-11-26 18:31:25.508675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.292 [2024-11-26 18:31:25.508683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.292 [2024-11-26 18:31:25.508702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.292 [2024-11-26 18:31:25.508714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:32.292 [2024-11-26 18:31:25.508722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.292 [2024-11-26 18:31:25.508736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.551 [2024-11-26 18:31:25.636287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.551 [2024-11-26 18:31:25.636460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:32.551 [2024-11-26 18:31:25.636478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.551 [2024-11-26 18:31:25.636487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.551 [2024-11-26 18:31:25.737824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.551 [2024-11-26 18:31:25.737883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:32.551 [2024-11-26 18:31:25.737897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.551 [2024-11-26 18:31:25.737920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.551 [2024-11-26 18:31:25.738018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.551 [2024-11-26 18:31:25.738028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:32.551 [2024-11-26 18:31:25.738036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.551 [2024-11-26 18:31:25.738044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.551 [2024-11-26 18:31:25.738072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.551 [2024-11-26 18:31:25.738081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:32.551 [2024-11-26 18:31:25.738092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.551 [2024-11-26 18:31:25.738099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.551 [2024-11-26 18:31:25.738214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.551 [2024-11-26 18:31:25.738225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:32.551 [2024-11-26 18:31:25.738232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.551 [2024-11-26 18:31:25.738240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.551 [2024-11-26 18:31:25.738273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.551 [2024-11-26 18:31:25.738282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:32.551 [2024-11-26 18:31:25.738290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.551 
[2024-11-26 18:31:25.738300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.551 [2024-11-26 18:31:25.738338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.551 [2024-11-26 18:31:25.738345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:32.551 [2024-11-26 18:31:25.738353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.551 [2024-11-26 18:31:25.738360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.551 [2024-11-26 18:31:25.738402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.551 [2024-11-26 18:31:25.738410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:32.551 [2024-11-26 18:31:25.738421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.551 [2024-11-26 18:31:25.738428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.551 [2024-11-26 18:31:25.738557] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.448 ms, result 0 00:27:33.929 00:27:33.929 00:27:33.930 18:31:27 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:27:33.930 18:31:27 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79379 00:27:33.930 18:31:27 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79379 00:27:33.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.930 18:31:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79379 ']' 00:27:33.930 18:31:27 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.930 18:31:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.930 18:31:27 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.930 18:31:27 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.930 18:31:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:33.930 [2024-11-26 18:31:27.204024] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
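[editor's note] trim.sh@71-73 above show the usual SPDK test pattern: launch spdk_tgt in the background, record its pid in svcpid, and block in waitforlisten until the target answers on /var/tmp/spdk.sock before issuing any RPCs. A minimal sketch of the same sequence, with a simple socket poll standing in for the real waitforlisten helper and ftl.json as a hypothetical saved bdev config:

    # start the target with FTL init tracing, as trim.sh@71 does
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # crude stand-in for waitforlisten: poll until the RPC socket exists
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # recreate the bdevs (load_config reads JSON from stdin, as at trim.sh@75),
    # then trim the first 1024 blocks exactly as trim.sh@78 does
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < ftl.json
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    # tear down the way killprocess does above: signal the pid, then wait for it
    kill "$svcpid"; wait "$svcpid"

The socket poll only proves the UNIX socket exists; the real waitforlisten additionally retries an RPC against the socket until the target responds, which is why the harness prints 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' above before the first RPC goes through.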
00:27:33.930 [2024-11-26 18:31:27.204215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79379 ] 00:27:34.189 [2024-11-26 18:31:27.379223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.189 [2024-11-26 18:31:27.505485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.569 18:31:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:35.569 18:31:28 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:35.569 18:31:28 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:27:35.569 [2024-11-26 18:31:28.727494] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:35.569 [2024-11-26 18:31:28.727705] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:35.828 [2024-11-26 18:31:28.905557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.828 [2024-11-26 18:31:28.905741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:35.828 [2024-11-26 18:31:28.905783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:35.828 [2024-11-26 18:31:28.905809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.828 [2024-11-26 18:31:28.909250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.828 [2024-11-26 18:31:28.909380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:35.828 [2024-11-26 18:31:28.909402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.405 ms 00:27:35.828 [2024-11-26 18:31:28.909411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.828 [2024-11-26 18:31:28.909581] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:35.828 [2024-11-26 18:31:28.910900] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:35.828 [2024-11-26 18:31:28.910939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.828 [2024-11-26 18:31:28.910950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:35.828 [2024-11-26 18:31:28.910963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.377 ms 00:27:35.829 [2024-11-26 18:31:28.910975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.829 [2024-11-26 18:31:28.912608] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:35.829 [2024-11-26 18:31:28.935221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.829 [2024-11-26 18:31:28.935317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:35.829 [2024-11-26 18:31:28.935335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.660 ms 00:27:35.829 [2024-11-26 18:31:28.935347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.829 [2024-11-26 18:31:28.935555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.829 [2024-11-26 18:31:28.935572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:35.829 [2024-11-26 18:31:28.935582] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:35.829 [2024-11-26 18:31:28.935593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.829 [2024-11-26 18:31:28.943552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.829 [2024-11-26 18:31:28.943641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:35.829 [2024-11-26 18:31:28.943656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.848 ms 00:27:35.829 [2024-11-26 18:31:28.943666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.829 [2024-11-26 18:31:28.943815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.829 [2024-11-26 18:31:28.943832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:35.829 [2024-11-26 18:31:28.943842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:27:35.829 [2024-11-26 18:31:28.943857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.829 [2024-11-26 18:31:28.943892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.829 [2024-11-26 18:31:28.943904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:35.829 [2024-11-26 18:31:28.943912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:35.829 [2024-11-26 18:31:28.943922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.829 [2024-11-26 18:31:28.943951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:35.829 [2024-11-26 18:31:28.949514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.829 [2024-11-26 18:31:28.949564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:35.829 [2024-11-26 18:31:28.949580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.577 ms 00:27:35.829 [2024-11-26 18:31:28.949589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.829 [2024-11-26 18:31:28.949710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.829 [2024-11-26 18:31:28.949723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:35.829 [2024-11-26 18:31:28.949739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:35.829 [2024-11-26 18:31:28.949768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.829 [2024-11-26 18:31:28.949795] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:35.829 [2024-11-26 18:31:28.949817] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:35.829 [2024-11-26 18:31:28.949870] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:35.829 [2024-11-26 18:31:28.949892] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:35.829 [2024-11-26 18:31:28.949994] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:35.829 [2024-11-26 18:31:28.950007] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:35.829 [2024-11-26 18:31:28.950026] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:35.829 [2024-11-26 18:31:28.950037] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:35.829 [2024-11-26 18:31:28.950049] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:35.829 [2024-11-26 18:31:28.950058] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:35.829 [2024-11-26 18:31:28.950068] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:35.829 [2024-11-26 18:31:28.950077] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:35.829 [2024-11-26 18:31:28.950089] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:35.829 [2024-11-26 18:31:28.950099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.829 [2024-11-26 18:31:28.950110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:35.829 [2024-11-26 18:31:28.950119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:27:35.829 [2024-11-26 18:31:28.950132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.829 [2024-11-26 18:31:28.950244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.829 [2024-11-26 18:31:28.950257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:35.829 [2024-11-26 18:31:28.950267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:35.829 [2024-11-26 18:31:28.950277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.829 [2024-11-26 18:31:28.950385] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:35.829 [2024-11-26 18:31:28.950405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:35.829 [2024-11-26 18:31:28.950415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:35.829 [2024-11-26 18:31:28.950426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.829 [2024-11-26 18:31:28.950435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:35.829 [2024-11-26 18:31:28.950445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:35.829 [2024-11-26 18:31:28.950453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:35.829 [2024-11-26 18:31:28.950468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:35.829 [2024-11-26 18:31:28.950477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:35.829 [2024-11-26 18:31:28.950487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:35.829 [2024-11-26 18:31:28.950496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:35.829 [2024-11-26 18:31:28.950506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:35.829 [2024-11-26 18:31:28.950514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:35.829 [2024-11-26 18:31:28.950524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:35.829 [2024-11-26 18:31:28.950532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:35.829 [2024-11-26 18:31:28.950542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.829 
[2024-11-26 18:31:28.950550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:35.829 [2024-11-26 18:31:28.950560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:35.829 [2024-11-26 18:31:28.950579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.829 [2024-11-26 18:31:28.950590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:35.829 [2024-11-26 18:31:28.950598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:35.829 [2024-11-26 18:31:28.950608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.829 [2024-11-26 18:31:28.950617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:35.829 [2024-11-26 18:31:28.950629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:35.829 [2024-11-26 18:31:28.950649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.829 [2024-11-26 18:31:28.950661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:35.829 [2024-11-26 18:31:28.950669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:35.829 [2024-11-26 18:31:28.950679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.829 [2024-11-26 18:31:28.950687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:35.829 [2024-11-26 18:31:28.950697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:35.830 [2024-11-26 18:31:28.950705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.830 [2024-11-26 18:31:28.950715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:35.830 [2024-11-26 18:31:28.950723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:35.830 [2024-11-26 18:31:28.950734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:35.830 [2024-11-26 18:31:28.950743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:35.830 [2024-11-26 18:31:28.950753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:35.830 [2024-11-26 18:31:28.950761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:35.830 [2024-11-26 18:31:28.950778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:35.830 [2024-11-26 18:31:28.950788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:35.830 [2024-11-26 18:31:28.950804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.830 [2024-11-26 18:31:28.950813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:35.830 [2024-11-26 18:31:28.950839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:35.830 [2024-11-26 18:31:28.950849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.830 [2024-11-26 18:31:28.950861] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:35.830 [2024-11-26 18:31:28.950879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:35.830 [2024-11-26 18:31:28.950892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:35.830 [2024-11-26 18:31:28.950901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.830 [2024-11-26 18:31:28.950915] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:27:35.830 [2024-11-26 18:31:28.950924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:35.830 [2024-11-26 18:31:28.950937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:35.830 [2024-11-26 18:31:28.950946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:35.830 [2024-11-26 18:31:28.950958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:35.830 [2024-11-26 18:31:28.950966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:35.830 [2024-11-26 18:31:28.950980] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:35.830 [2024-11-26 18:31:28.950991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:35.830 [2024-11-26 18:31:28.951010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:35.830 [2024-11-26 18:31:28.951020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:35.830 [2024-11-26 18:31:28.951035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:35.830 [2024-11-26 18:31:28.951044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:35.830 [2024-11-26 18:31:28.951057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:35.830 [2024-11-26 18:31:28.951066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:35.830 [2024-11-26 18:31:28.951080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:35.830 [2024-11-26 18:31:28.951090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:35.830 [2024-11-26 18:31:28.951104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:35.830 [2024-11-26 18:31:28.951113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:35.830 [2024-11-26 18:31:28.951128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:35.830 [2024-11-26 18:31:28.951137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:35.830 [2024-11-26 18:31:28.951151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:35.830 [2024-11-26 18:31:28.951161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:35.830 [2024-11-26 18:31:28.951174] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:35.830 [2024-11-26 
18:31:28.951184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:35.830 [2024-11-26 18:31:28.951201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:35.830 [2024-11-26 18:31:28.951211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:35.830 [2024-11-26 18:31:28.951224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:35.830 [2024-11-26 18:31:28.951233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:35.830 [2024-11-26 18:31:28.951247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.830 [2024-11-26 18:31:28.951257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:35.830 [2024-11-26 18:31:28.951269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:27:35.830 [2024-11-26 18:31:28.951280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.830 [2024-11-26 18:31:28.994004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.830 [2024-11-26 18:31:28.994065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:35.830 [2024-11-26 18:31:28.994084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.715 ms 00:27:35.830 [2024-11-26 18:31:28.994115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.830 [2024-11-26 18:31:28.994305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.830 [2024-11-26 18:31:28.994318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:35.830 [2024-11-26 18:31:28.994333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:35.830 [2024-11-26 18:31:28.994343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.830 [2024-11-26 18:31:29.044872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.830 [2024-11-26 18:31:29.044946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:35.830 [2024-11-26 18:31:29.044963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.595 ms 00:27:35.830 [2024-11-26 18:31:29.044972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.830 [2024-11-26 18:31:29.045110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.830 [2024-11-26 18:31:29.045121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:35.830 [2024-11-26 18:31:29.045133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:35.830 [2024-11-26 18:31:29.045142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.830 [2024-11-26 18:31:29.045597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.830 [2024-11-26 18:31:29.045613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:35.830 [2024-11-26 18:31:29.045653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:27:35.830 [2024-11-26 18:31:29.045662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:35.830 [2024-11-26 18:31:29.045798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.830 [2024-11-26 18:31:29.045815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:35.830 [2024-11-26 18:31:29.045827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:27:35.830 [2024-11-26 18:31:29.045835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.830 [2024-11-26 18:31:29.068031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.830 [2024-11-26 18:31:29.068201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:35.830 [2024-11-26 18:31:29.068224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.205 ms 00:27:35.830 [2024-11-26 18:31:29.068234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.830 [2024-11-26 18:31:29.099153] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:27:35.830 [2024-11-26 18:31:29.099315] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:35.830 [2024-11-26 18:31:29.099341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.831 [2024-11-26 18:31:29.099351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:35.831 [2024-11-26 18:31:29.099366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.992 ms 00:27:35.831 [2024-11-26 18:31:29.099389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.831 [2024-11-26 18:31:29.135284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.831 [2024-11-26 18:31:29.135353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:35.831 [2024-11-26 18:31:29.135370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.763 ms 00:27:35.831 [2024-11-26 18:31:29.135396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.831 [2024-11-26 18:31:29.155903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.831 [2024-11-26 18:31:29.155978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:35.831 [2024-11-26 18:31:29.155997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.362 ms 00:27:35.831 [2024-11-26 18:31:29.156004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.089 [2024-11-26 18:31:29.176033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.089 [2024-11-26 18:31:29.176128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:36.089 [2024-11-26 18:31:29.176146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.880 ms 00:27:36.090 [2024-11-26 18:31:29.176153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.090 [2024-11-26 18:31:29.177097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.090 [2024-11-26 18:31:29.177127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:36.090 [2024-11-26 18:31:29.177140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:27:36.090 [2024-11-26 18:31:29.177149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.090 [2024-11-26 
18:31:29.266844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.090 [2024-11-26 18:31:29.266914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:36.090 [2024-11-26 18:31:29.266931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.819 ms 00:27:36.090 [2024-11-26 18:31:29.266939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.090 [2024-11-26 18:31:29.280813] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:36.090 [2024-11-26 18:31:29.298356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.090 [2024-11-26 18:31:29.298534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:36.090 [2024-11-26 18:31:29.298553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.293 ms 00:27:36.090 [2024-11-26 18:31:29.298563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.090 [2024-11-26 18:31:29.298719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.090 [2024-11-26 18:31:29.298735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:36.090 [2024-11-26 18:31:29.298745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:36.090 [2024-11-26 18:31:29.298756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.090 [2024-11-26 18:31:29.298816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.090 [2024-11-26 18:31:29.298828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:36.090 [2024-11-26 18:31:29.298837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:36.090 [2024-11-26 18:31:29.298851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.090 [2024-11-26 18:31:29.298877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.090 [2024-11-26 18:31:29.298888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:36.090 [2024-11-26 18:31:29.298897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:36.090 [2024-11-26 18:31:29.298906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.090 [2024-11-26 18:31:29.298947] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:36.090 [2024-11-26 18:31:29.298962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.090 [2024-11-26 18:31:29.298974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:36.090 [2024-11-26 18:31:29.298985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:36.090 [2024-11-26 18:31:29.298996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.090 [2024-11-26 18:31:29.339495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.090 [2024-11-26 18:31:29.339566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:36.090 [2024-11-26 18:31:29.339584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.538 ms 00:27:36.090 [2024-11-26 18:31:29.339609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.090 [2024-11-26 18:31:29.339835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.090 [2024-11-26 18:31:29.339848] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:36.090 [2024-11-26 18:31:29.339862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:36.090 [2024-11-26 18:31:29.339870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.090 [2024-11-26 18:31:29.340948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:36.090 [2024-11-26 18:31:29.346026] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 435.894 ms, result 0 00:27:36.090 [2024-11-26 18:31:29.347309] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:36.090 Some configs were skipped because the RPC state that can call them passed over. 00:27:36.090 18:31:29 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:27:36.348 [2024-11-26 18:31:29.604489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.348 [2024-11-26 18:31:29.604688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:36.348 [2024-11-26 18:31:29.604754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.624 ms 00:27:36.348 [2024-11-26 18:31:29.604790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.348 [2024-11-26 18:31:29.604869] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.013 ms, result 0 00:27:36.348 true 00:27:36.348 18:31:29 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:27:36.607 [2024-11-26 18:31:29.837082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.607 [2024-11-26 18:31:29.837148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:36.607 [2024-11-26 18:31:29.837167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:27:36.607 [2024-11-26 18:31:29.837176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.607 [2024-11-26 18:31:29.837218] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.429 ms, result 0 00:27:36.607 true 00:27:36.607 18:31:29 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79379 00:27:36.607 18:31:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79379 ']' 00:27:36.607 18:31:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79379 00:27:36.607 18:31:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:27:36.607 18:31:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.607 18:31:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79379 00:27:36.607 killing process with pid 79379 00:27:36.607 18:31:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:36.607 18:31:29 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:36.607 18:31:29 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79379' 00:27:36.607 18:31:29 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79379 00:27:36.607 18:31:29 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79379 00:27:37.988 [2024-11-26 18:31:31.016028] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.988 [2024-11-26 18:31:31.016097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:37.988 [2024-11-26 18:31:31.016111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:37.988 [2024-11-26 18:31:31.016137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.988 [2024-11-26 18:31:31.016161] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:37.988 [2024-11-26 18:31:31.020455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.988 [2024-11-26 18:31:31.020498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:37.988 [2024-11-26 18:31:31.020516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.279 ms 00:27:37.988 [2024-11-26 18:31:31.020524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.988 [2024-11-26 18:31:31.020831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.988 [2024-11-26 18:31:31.020844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:37.988 [2024-11-26 18:31:31.020855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:27:37.988 [2024-11-26 18:31:31.020863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.988 [2024-11-26 18:31:31.026171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.988 [2024-11-26 18:31:31.026219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:37.988 [2024-11-26 18:31:31.026234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.292 ms 00:27:37.988 [2024-11-26 18:31:31.026243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.988 [2024-11-26 18:31:31.032270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.988 [2024-11-26 18:31:31.032307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:37.988 [2024-11-26 18:31:31.032320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.990 ms 00:27:37.988 [2024-11-26 18:31:31.032327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.988 [2024-11-26 18:31:31.048540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.988 [2024-11-26 18:31:31.048627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:37.988 [2024-11-26 18:31:31.048649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.132 ms 00:27:37.988 [2024-11-26 18:31:31.048657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.988 [2024-11-26 18:31:31.060029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.988 [2024-11-26 18:31:31.060104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:37.989 [2024-11-26 18:31:31.060120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.280 ms 00:27:37.989 [2024-11-26 18:31:31.060128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.989 [2024-11-26 18:31:31.060279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.989 [2024-11-26 18:31:31.060290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:37.989 [2024-11-26 18:31:31.060301] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:27:37.989 [2024-11-26 18:31:31.060309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.989 [2024-11-26 18:31:31.076444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.989 [2024-11-26 18:31:31.076509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:37.989 [2024-11-26 18:31:31.076524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.135 ms 00:27:37.989 [2024-11-26 18:31:31.076531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.989 [2024-11-26 18:31:31.092195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.989 [2024-11-26 18:31:31.092258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:37.989 [2024-11-26 18:31:31.092278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.602 ms 00:27:37.989 [2024-11-26 18:31:31.092286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.989 [2024-11-26 18:31:31.108290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.989 [2024-11-26 18:31:31.108355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:37.989 [2024-11-26 18:31:31.108372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.959 ms 00:27:37.989 [2024-11-26 18:31:31.108380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.989 [2024-11-26 18:31:31.124811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.989 [2024-11-26 18:31:31.124873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:37.989 [2024-11-26 18:31:31.124889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.365 ms 00:27:37.989 [2024-11-26 18:31:31.124897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.989 [2024-11-26 18:31:31.124954] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:37.989 [2024-11-26 18:31:31.124971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.124987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.124996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 
18:31:31.125070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:27:37.989 [2024-11-26 18:31:31.125289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:37.989 [2024-11-26 18:31:31.125585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.125995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.126005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.126014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.126021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.126031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.126038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.126047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.126055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.126064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:37.990 [2024-11-26 18:31:31.126098] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:37.990 [2024-11-26 18:31:31.126111] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36169abc-b0a7-4ba5-a3f2-e4fa05e6ff6b 00:27:37.990 [2024-11-26 18:31:31.126121] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:37.990 [2024-11-26 18:31:31.126130] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:37.990 [2024-11-26 18:31:31.126137] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:37.990 [2024-11-26 18:31:31.126147] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:37.990 [2024-11-26 18:31:31.126154] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:37.990 [2024-11-26 18:31:31.126163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:37.990 [2024-11-26 18:31:31.126171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:37.990 [2024-11-26 18:31:31.126179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:37.990 [2024-11-26 18:31:31.126185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:37.990 [2024-11-26 18:31:31.126195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:37.990 [2024-11-26 18:31:31.126202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:37.990 [2024-11-26 18:31:31.126212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:27:37.990 [2024-11-26 18:31:31.126222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.990 [2024-11-26 18:31:31.147393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.990 [2024-11-26 18:31:31.147548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:37.990 [2024-11-26 18:31:31.147575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.173 ms 00:27:37.990 [2024-11-26 18:31:31.147585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.990 [2024-11-26 18:31:31.148333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.990 [2024-11-26 18:31:31.148350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:37.990 [2024-11-26 18:31:31.148366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:27:37.990 [2024-11-26 18:31:31.148374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.990 [2024-11-26 18:31:31.223491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.990 [2024-11-26 18:31:31.223557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:37.990 [2024-11-26 18:31:31.223573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.990 [2024-11-26 18:31:31.223581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.990 [2024-11-26 18:31:31.223760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.990 [2024-11-26 18:31:31.223772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:37.990 [2024-11-26 18:31:31.223785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.990 [2024-11-26 18:31:31.223792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.990 [2024-11-26 18:31:31.223854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.990 [2024-11-26 18:31:31.223865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:37.990 [2024-11-26 18:31:31.223878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.990 [2024-11-26 18:31:31.223902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.990 [2024-11-26 18:31:31.223923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:37.990 [2024-11-26 18:31:31.223933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:37.990 [2024-11-26 18:31:31.223943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:37.990 [2024-11-26 18:31:31.223953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.249 [2024-11-26 18:31:31.349934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.249 [2024-11-26 18:31:31.350013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:38.249 [2024-11-26 18:31:31.350030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.249 [2024-11-26 18:31:31.350038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.249 [2024-11-26 
18:31:31.457524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.249 [2024-11-26 18:31:31.457597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:38.249 [2024-11-26 18:31:31.457643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.249 [2024-11-26 18:31:31.457652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.249 [2024-11-26 18:31:31.457790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.249 [2024-11-26 18:31:31.457801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:38.249 [2024-11-26 18:31:31.457817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.249 [2024-11-26 18:31:31.457825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.249 [2024-11-26 18:31:31.457859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.249 [2024-11-26 18:31:31.457868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:38.249 [2024-11-26 18:31:31.457879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.249 [2024-11-26 18:31:31.457887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.249 [2024-11-26 18:31:31.458037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.249 [2024-11-26 18:31:31.458051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:38.249 [2024-11-26 18:31:31.458062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.249 [2024-11-26 18:31:31.458071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.250 [2024-11-26 18:31:31.458114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.250 [2024-11-26 18:31:31.458125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:38.250 [2024-11-26 18:31:31.458137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.250 [2024-11-26 18:31:31.458145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.250 [2024-11-26 18:31:31.458193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.250 [2024-11-26 18:31:31.458209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:38.250 [2024-11-26 18:31:31.458223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.250 [2024-11-26 18:31:31.458232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.250 [2024-11-26 18:31:31.458281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:38.250 [2024-11-26 18:31:31.458291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:38.250 [2024-11-26 18:31:31.458302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:38.250 [2024-11-26 18:31:31.458310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.250 [2024-11-26 18:31:31.458462] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 443.262 ms, result 0 00:27:39.223 18:31:32 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:27:39.223 18:31:32 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:39.482 [2024-11-26 18:31:32.592711] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:27:39.482 [2024-11-26 18:31:32.592934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79444 ] 00:27:39.482 [2024-11-26 18:31:32.766672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.741 [2024-11-26 18:31:32.882248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.002 [2024-11-26 18:31:33.271881] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:40.002 [2024-11-26 18:31:33.272038] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:40.264 [2024-11-26 18:31:33.432580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.264 [2024-11-26 18:31:33.432676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:40.264 [2024-11-26 18:31:33.432691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:40.264 [2024-11-26 18:31:33.432700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.264 [2024-11-26 18:31:33.436095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.264 [2024-11-26 18:31:33.436140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:40.264 [2024-11-26 18:31:33.436153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.379 ms 00:27:40.264 [2024-11-26 18:31:33.436162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.264 [2024-11-26 18:31:33.436283] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:40.264 [2024-11-26 18:31:33.437496] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:40.264 [2024-11-26 18:31:33.437531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.264 [2024-11-26 18:31:33.437542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:40.264 [2024-11-26 18:31:33.437551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:27:40.264 [2024-11-26 18:31:33.437560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.264 [2024-11-26 18:31:33.439229] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:40.264 [2024-11-26 18:31:33.461618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.264 [2024-11-26 18:31:33.461690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:40.264 [2024-11-26 18:31:33.461704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.430 ms 00:27:40.264 [2024-11-26 18:31:33.461729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.264 [2024-11-26 18:31:33.461916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.264 [2024-11-26 18:31:33.461931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:40.264 [2024-11-26 18:31:33.461941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.041 ms 00:27:40.264 [2024-11-26 18:31:33.461949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.264 [2024-11-26 18:31:33.469702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.264 [2024-11-26 18:31:33.469750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:40.264 [2024-11-26 18:31:33.469763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.715 ms 00:27:40.264 [2024-11-26 18:31:33.469771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.264 [2024-11-26 18:31:33.469903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.264 [2024-11-26 18:31:33.469924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:40.264 [2024-11-26 18:31:33.469935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:40.264 [2024-11-26 18:31:33.469948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.264 [2024-11-26 18:31:33.469987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.264 [2024-11-26 18:31:33.469997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:40.264 [2024-11-26 18:31:33.470006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:40.264 [2024-11-26 18:31:33.470013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.264 [2024-11-26 18:31:33.470039] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:40.264 [2024-11-26 18:31:33.475229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.264 [2024-11-26 18:31:33.475267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:40.264 [2024-11-26 18:31:33.475277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.208 ms 00:27:40.264 [2024-11-26 18:31:33.475286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.264 [2024-11-26 18:31:33.475375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.264 [2024-11-26 18:31:33.475386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:40.264 [2024-11-26 18:31:33.475394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:40.264 [2024-11-26 18:31:33.475407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.264 [2024-11-26 18:31:33.475429] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:40.264 [2024-11-26 18:31:33.475449] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:40.264 [2024-11-26 18:31:33.475483] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:40.264 [2024-11-26 18:31:33.475498] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:40.264 [2024-11-26 18:31:33.475589] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:40.264 [2024-11-26 18:31:33.475599] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:40.264 [2024-11-26 18:31:33.475612] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:40.264 [2024-11-26 18:31:33.475658] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:40.264 [2024-11-26 18:31:33.475669] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:40.264 [2024-11-26 18:31:33.475679] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:40.264 [2024-11-26 18:31:33.475687] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:40.264 [2024-11-26 18:31:33.475695] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:40.264 [2024-11-26 18:31:33.475703] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:40.264 [2024-11-26 18:31:33.475712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.264 [2024-11-26 18:31:33.475720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:40.264 [2024-11-26 18:31:33.475729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:27:40.264 [2024-11-26 18:31:33.475737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.264 [2024-11-26 18:31:33.475827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.264 [2024-11-26 18:31:33.475837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:40.264 [2024-11-26 18:31:33.475846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:27:40.264 [2024-11-26 18:31:33.475853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.264 [2024-11-26 18:31:33.475956] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:40.264 [2024-11-26 18:31:33.475973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:40.264 [2024-11-26 18:31:33.475983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:40.264 [2024-11-26 18:31:33.475991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.264 [2024-11-26 18:31:33.476000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:40.264 [2024-11-26 18:31:33.476009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:40.264 [2024-11-26 18:31:33.476017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:40.264 [2024-11-26 18:31:33.476024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:40.264 [2024-11-26 18:31:33.476032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:40.264 [2024-11-26 18:31:33.476039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:40.264 [2024-11-26 18:31:33.476047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:40.264 [2024-11-26 18:31:33.476074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:40.264 [2024-11-26 18:31:33.476083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:40.264 [2024-11-26 18:31:33.476091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:40.264 [2024-11-26 18:31:33.476099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:40.265 [2024-11-26 18:31:33.476106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.265 [2024-11-26 18:31:33.476114] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:40.265 [2024-11-26 18:31:33.476121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:40.265 [2024-11-26 18:31:33.476128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.265 [2024-11-26 18:31:33.476136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:40.265 [2024-11-26 18:31:33.476144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:40.265 [2024-11-26 18:31:33.476151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.265 [2024-11-26 18:31:33.476158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:40.265 [2024-11-26 18:31:33.476165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:40.265 [2024-11-26 18:31:33.476172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.265 [2024-11-26 18:31:33.476179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:40.265 [2024-11-26 18:31:33.476186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:40.265 [2024-11-26 18:31:33.476193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.265 [2024-11-26 18:31:33.476200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:40.265 [2024-11-26 18:31:33.476207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:40.265 [2024-11-26 18:31:33.476214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:40.265 [2024-11-26 18:31:33.476222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:40.265 [2024-11-26 18:31:33.476230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:40.265 [2024-11-26 18:31:33.476237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:40.265 [2024-11-26 18:31:33.476245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:40.265 [2024-11-26 18:31:33.476253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:40.265 [2024-11-26 18:31:33.476259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:40.265 [2024-11-26 18:31:33.476268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:40.265 [2024-11-26 18:31:33.476275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:40.265 [2024-11-26 18:31:33.476282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.265 [2024-11-26 18:31:33.476289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:40.265 [2024-11-26 18:31:33.476296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:40.265 [2024-11-26 18:31:33.476303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.265 [2024-11-26 18:31:33.476311] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:40.265 [2024-11-26 18:31:33.476323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:40.265 [2024-11-26 18:31:33.476331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:40.265 [2024-11-26 18:31:33.476340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.265 [2024-11-26 18:31:33.476348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:40.265 
[2024-11-26 18:31:33.476356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:40.265 [2024-11-26 18:31:33.476363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:40.265 [2024-11-26 18:31:33.476370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:40.265 [2024-11-26 18:31:33.476378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:40.265 [2024-11-26 18:31:33.476386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:40.265 [2024-11-26 18:31:33.476395] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:40.265 [2024-11-26 18:31:33.476406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:40.265 [2024-11-26 18:31:33.476415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:40.265 [2024-11-26 18:31:33.476424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:40.265 [2024-11-26 18:31:33.476431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:40.265 [2024-11-26 18:31:33.476440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:40.265 [2024-11-26 18:31:33.476448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:40.265 [2024-11-26 18:31:33.476456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:40.265 [2024-11-26 18:31:33.476463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:40.265 [2024-11-26 18:31:33.476471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:40.265 [2024-11-26 18:31:33.476478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:40.265 [2024-11-26 18:31:33.476487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:40.265 [2024-11-26 18:31:33.476495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:40.265 [2024-11-26 18:31:33.476503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:40.265 [2024-11-26 18:31:33.476511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:40.265 [2024-11-26 18:31:33.476519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:40.265 [2024-11-26 18:31:33.476527] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:40.265 [2024-11-26 18:31:33.476536] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:40.265 [2024-11-26 18:31:33.476548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:40.265 [2024-11-26 18:31:33.476556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:40.265 [2024-11-26 18:31:33.476564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:40.265 [2024-11-26 18:31:33.476573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:40.265 [2024-11-26 18:31:33.476583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.265 [2024-11-26 18:31:33.476591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:40.265 [2024-11-26 18:31:33.476600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:27:40.265 [2024-11-26 18:31:33.476611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.265 [2024-11-26 18:31:33.516098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.265 [2024-11-26 18:31:33.516252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:40.265 [2024-11-26 18:31:33.516286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.487 ms 00:27:40.265 [2024-11-26 18:31:33.516313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.265 [2024-11-26 18:31:33.516498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.265 [2024-11-26 18:31:33.516537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:40.265 [2024-11-26 18:31:33.516565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:40.265 [2024-11-26 18:31:33.516586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.265 [2024-11-26 18:31:33.571339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.265 [2024-11-26 18:31:33.571487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:40.265 [2024-11-26 18:31:33.571521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.798 ms 00:27:40.265 [2024-11-26 18:31:33.571542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.265 [2024-11-26 18:31:33.571734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.265 [2024-11-26 18:31:33.571779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:40.265 [2024-11-26 18:31:33.571814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:40.265 [2024-11-26 18:31:33.571847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.265 [2024-11-26 18:31:33.572339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.265 [2024-11-26 18:31:33.572383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:40.265 [2024-11-26 18:31:33.572423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:27:40.265 [2024-11-26 18:31:33.572453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.265 [2024-11-26 
18:31:33.572593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.265 [2024-11-26 18:31:33.572652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:40.265 [2024-11-26 18:31:33.572700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:27:40.265 [2024-11-26 18:31:33.572725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.265 [2024-11-26 18:31:33.591981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.265 [2024-11-26 18:31:33.592116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:40.265 [2024-11-26 18:31:33.592148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.207 ms 00:27:40.265 [2024-11-26 18:31:33.592186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.525 [2024-11-26 18:31:33.612815] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:27:40.525 [2024-11-26 18:31:33.612981] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:40.525 [2024-11-26 18:31:33.613025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.525 [2024-11-26 18:31:33.613046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:40.525 [2024-11-26 18:31:33.613069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.691 ms 00:27:40.525 [2024-11-26 18:31:33.613088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.525 [2024-11-26 18:31:33.648717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.525 [2024-11-26 18:31:33.648942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:40.525 [2024-11-26 18:31:33.648983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.521 ms 00:27:40.525 [2024-11-26 18:31:33.649007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.525 [2024-11-26 18:31:33.672050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.525 [2024-11-26 18:31:33.672215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:40.525 [2024-11-26 18:31:33.672270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.879 ms 00:27:40.525 [2024-11-26 18:31:33.672295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.525 [2024-11-26 18:31:33.695677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.525 [2024-11-26 18:31:33.695857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:40.525 [2024-11-26 18:31:33.695895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.220 ms 00:27:40.525 [2024-11-26 18:31:33.695922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.525 [2024-11-26 18:31:33.696989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.525 [2024-11-26 18:31:33.697072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:40.525 [2024-11-26 18:31:33.697111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:27:40.525 [2024-11-26 18:31:33.697141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.525 [2024-11-26 18:31:33.801688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:40.525 [2024-11-26 18:31:33.801866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:40.525 [2024-11-26 18:31:33.801905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.690 ms 00:27:40.525 [2024-11-26 18:31:33.801929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.525 [2024-11-26 18:31:33.818151] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:40.525 [2024-11-26 18:31:33.836859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.525 [2024-11-26 18:31:33.836928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:40.525 [2024-11-26 18:31:33.836952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.778 ms 00:27:40.525 [2024-11-26 18:31:33.836962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.525 [2024-11-26 18:31:33.837097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.525 [2024-11-26 18:31:33.837111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:40.525 [2024-11-26 18:31:33.837121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:40.525 [2024-11-26 18:31:33.837129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.525 [2024-11-26 18:31:33.837191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.525 [2024-11-26 18:31:33.837201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:40.525 [2024-11-26 18:31:33.837216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:40.525 [2024-11-26 18:31:33.837228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.525 [2024-11-26 18:31:33.837267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.525 [2024-11-26 18:31:33.837283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:40.525 [2024-11-26 18:31:33.837293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:40.525 [2024-11-26 18:31:33.837302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.525 [2024-11-26 18:31:33.837343] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:40.525 [2024-11-26 18:31:33.837355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.525 [2024-11-26 18:31:33.837364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:40.525 [2024-11-26 18:31:33.837372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:40.525 [2024-11-26 18:31:33.837381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.784 [2024-11-26 18:31:33.883335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.784 [2024-11-26 18:31:33.883411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:40.784 [2024-11-26 18:31:33.883429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.013 ms 00:27:40.784 [2024-11-26 18:31:33.883438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.784 [2024-11-26 18:31:33.883678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.784 [2024-11-26 18:31:33.883710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:27:40.784 [2024-11-26 18:31:33.883720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:27:40.784 [2024-11-26 18:31:33.883733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.784 [2024-11-26 18:31:33.884977] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:40.784 [2024-11-26 18:31:33.891211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 452.889 ms, result 0 00:27:40.784 [2024-11-26 18:31:33.892029] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:40.784 [2024-11-26 18:31:33.914108] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:41.723  [2024-11-26T18:31:35.997Z] Copying: 32/256 [MB] (32 MBps) [2024-11-26T18:31:36.935Z] Copying: 60/256 [MB] (27 MBps) [2024-11-26T18:31:38.317Z] Copying: 87/256 [MB] (27 MBps) [2024-11-26T18:31:39.257Z] Copying: 113/256 [MB] (25 MBps) [2024-11-26T18:31:40.202Z] Copying: 137/256 [MB] (24 MBps) [2024-11-26T18:31:41.142Z] Copying: 162/256 [MB] (24 MBps) [2024-11-26T18:31:42.081Z] Copying: 187/256 [MB] (24 MBps) [2024-11-26T18:31:43.016Z] Copying: 215/256 [MB] (27 MBps) [2024-11-26T18:31:43.583Z] Copying: 242/256 [MB] (26 MBps) [2024-11-26T18:31:43.583Z] Copying: 256/256 [MB] (average 27 MBps)[2024-11-26 18:31:43.397107] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:50.248 [2024-11-26 18:31:43.411267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.248 [2024-11-26 18:31:43.411364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:50.248 [2024-11-26 18:31:43.411400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:50.248 [2024-11-26 18:31:43.411420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.248 [2024-11-26 18:31:43.411453] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:50.248 [2024-11-26 18:31:43.415332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.248 [2024-11-26 18:31:43.415391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:50.248 [2024-11-26 18:31:43.415416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.831 ms 00:27:50.248 [2024-11-26 18:31:43.415450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.248 [2024-11-26 18:31:43.415697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.248 [2024-11-26 18:31:43.415733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:50.248 [2024-11-26 18:31:43.415759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:27:50.248 [2024-11-26 18:31:43.415777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.248 [2024-11-26 18:31:43.418547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.248 [2024-11-26 18:31:43.418590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:50.248 [2024-11-26 18:31:43.418629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.706 ms 00:27:50.248 [2024-11-26 18:31:43.418653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
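Each management step in the trace above is emitted as a fixed Action record (name, duration, status) by trace_step in mngt/ftl_mngt.c, and finish_msg then reports the aggregate for the whole pipeline ("Management process finished, name 'FTL startup', duration = 452.889 ms, result 0"). A minimal sketch for tallying those per-step figures from a saved copy of this log, assuming it is saved as ftl.log (the filename is illustrative):

    # Sum every reported step duration. This spans all management pipelines
    # in the log (startup and shutdown, every run), so the total will exceed
    # any single 'Management process finished' figure.
    grep -o 'duration: [0-9.]* ms' ftl.log \
      | awk '{sum += $2} END {printf "steps total: %.3f ms\n", sum}'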
00:27:50.248 [2024-11-26 18:31:43.423887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.248 [2024-11-26 18:31:43.423941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:50.248 [2024-11-26 18:31:43.423963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.216 ms 00:27:50.248 [2024-11-26 18:31:43.423997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.248 [2024-11-26 18:31:43.456924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.248 [2024-11-26 18:31:43.456993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:50.249 [2024-11-26 18:31:43.457035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.923 ms 00:27:50.249 [2024-11-26 18:31:43.457053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.249 [2024-11-26 18:31:43.477119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.249 [2024-11-26 18:31:43.477192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:50.249 [2024-11-26 18:31:43.477205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.060 ms 00:27:50.249 [2024-11-26 18:31:43.477212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.249 [2024-11-26 18:31:43.477348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.249 [2024-11-26 18:31:43.477359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:50.249 [2024-11-26 18:31:43.477388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:27:50.249 [2024-11-26 18:31:43.477396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.249 [2024-11-26 18:31:43.513595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.249 [2024-11-26 18:31:43.513681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:50.249 [2024-11-26 18:31:43.513697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.253 ms 00:27:50.249 [2024-11-26 18:31:43.513704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.249 [2024-11-26 18:31:43.548557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.249 [2024-11-26 18:31:43.548591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:50.249 [2024-11-26 18:31:43.548601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.866 ms 00:27:50.249 [2024-11-26 18:31:43.548623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.509 [2024-11-26 18:31:43.581191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.510 [2024-11-26 18:31:43.581225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:50.510 [2024-11-26 18:31:43.581234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.586 ms 00:27:50.510 [2024-11-26 18:31:43.581240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.510 [2024-11-26 18:31:43.614265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.510 [2024-11-26 18:31:43.614298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:50.510 [2024-11-26 18:31:43.614307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.029 ms 00:27:50.510 [2024-11-26 
18:31:43.614313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.510 [2024-11-26 18:31:43.614346] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:50.510 [2024-11-26 18:31:43.614359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614509] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 
18:31:43.614706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:50.510 [2024-11-26 18:31:43.614895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:27:50.511 [2024-11-26 18:31:43.614902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.614920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.614927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.614934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.614941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.614947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.614954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.614960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.614967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.614974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.614981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.614987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.614994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:50.511 [2024-11-26 18:31:43.615139] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:50.511 [2024-11-26 18:31:43.615146] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36169abc-b0a7-4ba5-a3f2-e4fa05e6ff6b 00:27:50.511 [2024-11-26 18:31:43.615152] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:50.511 [2024-11-26 18:31:43.615159] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:50.511 [2024-11-26 18:31:43.615165] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:50.511 [2024-11-26 18:31:43.615173] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:50.511 [2024-11-26 18:31:43.615179] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:50.511 [2024-11-26 18:31:43.615192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:50.511 [2024-11-26 18:31:43.615199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:50.511 [2024-11-26 18:31:43.615205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:50.511 [2024-11-26 18:31:43.615211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:50.511 [2024-11-26 18:31:43.615218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.511 [2024-11-26 18:31:43.615225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:50.511 [2024-11-26 18:31:43.615233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:27:50.511 [2024-11-26 18:31:43.615240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.511 [2024-11-26 18:31:43.634644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.511 [2024-11-26 18:31:43.634673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:50.511 [2024-11-26 18:31:43.634711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.422 ms 00:27:50.511 [2024-11-26 18:31:43.634724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.511 [2024-11-26 18:31:43.635232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.511 [2024-11-26 18:31:43.635245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:50.511 [2024-11-26 18:31:43.635254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:27:50.511 [2024-11-26 18:31:43.635261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.511 [2024-11-26 18:31:43.686045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.511 [2024-11-26 18:31:43.686077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:50.511 [2024-11-26 18:31:43.686110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.511 [2024-11-26 18:31:43.686117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.511 [2024-11-26 18:31:43.686186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.511 [2024-11-26 18:31:43.686195] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:50.511 [2024-11-26 18:31:43.686203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.511 [2024-11-26 18:31:43.686210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.511 [2024-11-26 18:31:43.686261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.511 [2024-11-26 18:31:43.686272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:50.511 [2024-11-26 18:31:43.686279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.511 [2024-11-26 18:31:43.686292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.511 [2024-11-26 18:31:43.686309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.511 [2024-11-26 18:31:43.686316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:50.511 [2024-11-26 18:31:43.686324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.511 [2024-11-26 18:31:43.686330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.511 [2024-11-26 18:31:43.799375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.511 [2024-11-26 18:31:43.799433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:50.511 [2024-11-26 18:31:43.799444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.511 [2024-11-26 18:31:43.799456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.771 [2024-11-26 18:31:43.891772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.771 [2024-11-26 18:31:43.891902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:50.771 [2024-11-26 18:31:43.891917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.771 [2024-11-26 18:31:43.891941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.771 [2024-11-26 18:31:43.892015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.771 [2024-11-26 18:31:43.892024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:50.771 [2024-11-26 18:31:43.892032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.771 [2024-11-26 18:31:43.892038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.771 [2024-11-26 18:31:43.892070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.771 [2024-11-26 18:31:43.892078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:50.771 [2024-11-26 18:31:43.892085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.771 [2024-11-26 18:31:43.892092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.771 [2024-11-26 18:31:43.892219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.771 [2024-11-26 18:31:43.892230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:50.771 [2024-11-26 18:31:43.892238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.771 [2024-11-26 18:31:43.892245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.771 [2024-11-26 18:31:43.892279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:27:50.771 [2024-11-26 18:31:43.892293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:50.771 [2024-11-26 18:31:43.892301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.771 [2024-11-26 18:31:43.892308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.771 [2024-11-26 18:31:43.892344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.771 [2024-11-26 18:31:43.892352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:50.771 [2024-11-26 18:31:43.892359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.771 [2024-11-26 18:31:43.892366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.771 [2024-11-26 18:31:43.892408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:50.771 [2024-11-26 18:31:43.892417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:50.771 [2024-11-26 18:31:43.892424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:50.771 [2024-11-26 18:31:43.892432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.771 [2024-11-26 18:31:43.892564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 482.210 ms, result 0 00:27:51.709 00:27:51.709 00:27:51.709 18:31:44 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:27:51.709 18:31:44 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:27:52.278 18:31:45 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:52.278 [2024-11-26 18:31:45.418858] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
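In the statistics dump above, WAF (write amplification factor) is reported as inf because it is the ratio of device writes to host writes: with total writes = 960 and user writes = 0, WAF = 960 / 0, which is undefined — this pass wrote only FTL metadata, never user data. The trim.sh steps that follow then verify and refill the data: cmp --bytes=4194304 checks that the first 4 MiB of the dumped file compare equal to /dev/zero, md5sum records its checksum, and spdk_dd copies a random pattern back into the ftl0 bdev; --count=1024 is consistent with that 4 MiB at a 4 KiB block size (1024 * 4096 = 4194304 bytes). The same verification as a standalone sketch, with paths as they appear in the log:

    # Verify the trimmed range reads back as zeroes, then refill it.
    cd /home/vagrant/spdk_repo/spdk
    cmp --bytes=4194304 test/ftl/data /dev/zero && echo 'trimmed range is zeroed'
    md5sum test/ftl/data
    ./build/bin/spdk_dd --if=test/ftl/random_pattern --ob=ftl0 \
        --count=1024 --json=test/ftl/config/ftl.json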
00:27:52.278 [2024-11-26 18:31:45.418984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79575 ] 00:27:52.278 [2024-11-26 18:31:45.592118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.538 [2024-11-26 18:31:45.706920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.797 [2024-11-26 18:31:46.040197] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:52.797 [2024-11-26 18:31:46.040265] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:53.059 [2024-11-26 18:31:46.195528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.059 [2024-11-26 18:31:46.195578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:53.059 [2024-11-26 18:31:46.195606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:53.059 [2024-11-26 18:31:46.195614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.059 [2024-11-26 18:31:46.198344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.059 [2024-11-26 18:31:46.198382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:53.059 [2024-11-26 18:31:46.198392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:27:53.059 [2024-11-26 18:31:46.198399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.059 [2024-11-26 18:31:46.198497] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:53.059 [2024-11-26 18:31:46.199550] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:53.059 [2024-11-26 18:31:46.199581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.059 [2024-11-26 18:31:46.199589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:53.059 [2024-11-26 18:31:46.199597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.094 ms 00:27:53.059 [2024-11-26 18:31:46.199604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.059 [2024-11-26 18:31:46.201051] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:53.059 [2024-11-26 18:31:46.219492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.059 [2024-11-26 18:31:46.219531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:53.059 [2024-11-26 18:31:46.219542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.478 ms 00:27:53.059 [2024-11-26 18:31:46.219549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.059 [2024-11-26 18:31:46.219658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.059 [2024-11-26 18:31:46.219670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:53.059 [2024-11-26 18:31:46.219679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:53.059 [2024-11-26 18:31:46.219686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.059 [2024-11-26 18:31:46.226330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:53.059 [2024-11-26 18:31:46.226357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:53.059 [2024-11-26 18:31:46.226366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.620 ms 00:27:53.059 [2024-11-26 18:31:46.226373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.059 [2024-11-26 18:31:46.226474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.059 [2024-11-26 18:31:46.226487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:53.059 [2024-11-26 18:31:46.226496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:53.059 [2024-11-26 18:31:46.226503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.059 [2024-11-26 18:31:46.226534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.059 [2024-11-26 18:31:46.226543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:53.059 [2024-11-26 18:31:46.226551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:53.059 [2024-11-26 18:31:46.226558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.059 [2024-11-26 18:31:46.226579] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:53.059 [2024-11-26 18:31:46.231118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.059 [2024-11-26 18:31:46.231167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:53.059 [2024-11-26 18:31:46.231178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.554 ms 00:27:53.059 [2024-11-26 18:31:46.231184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.059 [2024-11-26 18:31:46.231257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.059 [2024-11-26 18:31:46.231267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:53.059 [2024-11-26 18:31:46.231274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:53.059 [2024-11-26 18:31:46.231281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.059 [2024-11-26 18:31:46.231303] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:53.059 [2024-11-26 18:31:46.231322] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:53.059 [2024-11-26 18:31:46.231355] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:53.059 [2024-11-26 18:31:46.231369] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:53.059 [2024-11-26 18:31:46.231452] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:53.059 [2024-11-26 18:31:46.231462] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:53.059 [2024-11-26 18:31:46.231471] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:53.059 [2024-11-26 18:31:46.231484] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:53.059 [2024-11-26 18:31:46.231492] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:53.059 [2024-11-26 18:31:46.231500] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:53.059 [2024-11-26 18:31:46.231508] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:53.059 [2024-11-26 18:31:46.231515] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:53.059 [2024-11-26 18:31:46.231521] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:53.059 [2024-11-26 18:31:46.231529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.059 [2024-11-26 18:31:46.231536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:53.059 [2024-11-26 18:31:46.231543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:27:53.059 [2024-11-26 18:31:46.231549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.059 [2024-11-26 18:31:46.231636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.059 [2024-11-26 18:31:46.231657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:53.059 [2024-11-26 18:31:46.231666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:53.059 [2024-11-26 18:31:46.231673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.059 [2024-11-26 18:31:46.231759] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:53.059 [2024-11-26 18:31:46.231769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:53.059 [2024-11-26 18:31:46.231776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:53.059 [2024-11-26 18:31:46.231783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.059 [2024-11-26 18:31:46.231790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:53.059 [2024-11-26 18:31:46.231797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:53.059 [2024-11-26 18:31:46.231804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:53.059 [2024-11-26 18:31:46.231811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:53.059 [2024-11-26 18:31:46.231818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:53.059 [2024-11-26 18:31:46.231824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:53.059 [2024-11-26 18:31:46.231830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:53.059 [2024-11-26 18:31:46.231848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:53.059 [2024-11-26 18:31:46.231854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:53.059 [2024-11-26 18:31:46.231861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:53.059 [2024-11-26 18:31:46.231867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:53.059 [2024-11-26 18:31:46.231873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.059 [2024-11-26 18:31:46.231880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:53.059 [2024-11-26 18:31:46.231887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:53.059 [2024-11-26 18:31:46.231893] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.059 [2024-11-26 18:31:46.231900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:53.059 [2024-11-26 18:31:46.231906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:53.059 [2024-11-26 18:31:46.231914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:53.059 [2024-11-26 18:31:46.231919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:53.059 [2024-11-26 18:31:46.231926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:53.059 [2024-11-26 18:31:46.231932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:53.059 [2024-11-26 18:31:46.231938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:53.059 [2024-11-26 18:31:46.231944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:53.059 [2024-11-26 18:31:46.231951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:53.059 [2024-11-26 18:31:46.231956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:53.060 [2024-11-26 18:31:46.231962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:53.060 [2024-11-26 18:31:46.231968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:53.060 [2024-11-26 18:31:46.231974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:53.060 [2024-11-26 18:31:46.231980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:53.060 [2024-11-26 18:31:46.231986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:53.060 [2024-11-26 18:31:46.231992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:53.060 [2024-11-26 18:31:46.231999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:53.060 [2024-11-26 18:31:46.232005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:53.060 [2024-11-26 18:31:46.232011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:53.060 [2024-11-26 18:31:46.232017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:53.060 [2024-11-26 18:31:46.232023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.060 [2024-11-26 18:31:46.232030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:53.060 [2024-11-26 18:31:46.232037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:53.060 [2024-11-26 18:31:46.232042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.060 [2024-11-26 18:31:46.232048] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:53.060 [2024-11-26 18:31:46.232056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:53.060 [2024-11-26 18:31:46.232066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:53.060 [2024-11-26 18:31:46.232073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:53.060 [2024-11-26 18:31:46.232080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:53.060 [2024-11-26 18:31:46.232087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:53.060 [2024-11-26 18:31:46.232093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:53.060 
[2024-11-26 18:31:46.232100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:53.060 [2024-11-26 18:31:46.232106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:53.060 [2024-11-26 18:31:46.232113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:53.060 [2024-11-26 18:31:46.232120] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:53.060 [2024-11-26 18:31:46.232129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:53.060 [2024-11-26 18:31:46.232137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:53.060 [2024-11-26 18:31:46.232144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:53.060 [2024-11-26 18:31:46.232150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:53.060 [2024-11-26 18:31:46.232157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:53.060 [2024-11-26 18:31:46.232164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:53.060 [2024-11-26 18:31:46.232171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:53.060 [2024-11-26 18:31:46.232178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:53.060 [2024-11-26 18:31:46.232184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:53.060 [2024-11-26 18:31:46.232191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:53.060 [2024-11-26 18:31:46.232198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:53.060 [2024-11-26 18:31:46.232205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:53.060 [2024-11-26 18:31:46.232211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:53.060 [2024-11-26 18:31:46.232218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:53.060 [2024-11-26 18:31:46.232224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:53.060 [2024-11-26 18:31:46.232231] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:53.060 [2024-11-26 18:31:46.232246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:53.060 [2024-11-26 18:31:46.232255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:53.060 [2024-11-26 18:31:46.232263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:53.060 [2024-11-26 18:31:46.232270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:53.060 [2024-11-26 18:31:46.232277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:53.060 [2024-11-26 18:31:46.232284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.060 [2024-11-26 18:31:46.232295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:53.060 [2024-11-26 18:31:46.232303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:27:53.060 [2024-11-26 18:31:46.232309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.060 [2024-11-26 18:31:46.270134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.060 [2024-11-26 18:31:46.270175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:53.060 [2024-11-26 18:31:46.270202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.841 ms 00:27:53.060 [2024-11-26 18:31:46.270210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.060 [2024-11-26 18:31:46.270333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.060 [2024-11-26 18:31:46.270343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:53.060 [2024-11-26 18:31:46.270352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:53.060 [2024-11-26 18:31:46.270359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.060 [2024-11-26 18:31:46.347465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.060 [2024-11-26 18:31:46.347508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:53.060 [2024-11-26 18:31:46.347521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.234 ms 00:27:53.060 [2024-11-26 18:31:46.347529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.060 [2024-11-26 18:31:46.347655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.060 [2024-11-26 18:31:46.347666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:53.060 [2024-11-26 18:31:46.347674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:53.060 [2024-11-26 18:31:46.347683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.060 [2024-11-26 18:31:46.348121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.060 [2024-11-26 18:31:46.348139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:53.060 [2024-11-26 18:31:46.348152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:27:53.060 [2024-11-26 18:31:46.348160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.060 [2024-11-26 18:31:46.348267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.060 [2024-11-26 18:31:46.348283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:53.060 [2024-11-26 18:31:46.348291] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:27:53.060 [2024-11-26 18:31:46.348298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.060 [2024-11-26 18:31:46.366458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.060 [2024-11-26 18:31:46.366494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:53.060 [2024-11-26 18:31:46.366519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.173 ms 00:27:53.060 [2024-11-26 18:31:46.366526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.060 [2024-11-26 18:31:46.384259] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:27:53.060 [2024-11-26 18:31:46.384295] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:53.060 [2024-11-26 18:31:46.384306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.060 [2024-11-26 18:31:46.384314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:53.060 [2024-11-26 18:31:46.384321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.711 ms 00:27:53.060 [2024-11-26 18:31:46.384344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.320 [2024-11-26 18:31:46.411303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.320 [2024-11-26 18:31:46.411341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:53.320 [2024-11-26 18:31:46.411351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.938 ms 00:27:53.320 [2024-11-26 18:31:46.411358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.320 [2024-11-26 18:31:46.427537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.320 [2024-11-26 18:31:46.427571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:53.320 [2024-11-26 18:31:46.427597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.127 ms 00:27:53.320 [2024-11-26 18:31:46.427603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.321 [2024-11-26 18:31:46.444329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.321 [2024-11-26 18:31:46.444363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:53.321 [2024-11-26 18:31:46.444373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.689 ms 00:27:53.321 [2024-11-26 18:31:46.444379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.321 [2024-11-26 18:31:46.445144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.321 [2024-11-26 18:31:46.445173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:53.321 [2024-11-26 18:31:46.445182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:27:53.321 [2024-11-26 18:31:46.445190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.321 [2024-11-26 18:31:46.529400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.321 [2024-11-26 18:31:46.529458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:53.321 [2024-11-26 18:31:46.529472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.346 ms 00:27:53.321 [2024-11-26 18:31:46.529496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.321 [2024-11-26 18:31:46.541569] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:53.321 [2024-11-26 18:31:46.557998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.321 [2024-11-26 18:31:46.558049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:53.321 [2024-11-26 18:31:46.558061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.438 ms 00:27:53.321 [2024-11-26 18:31:46.558092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.321 [2024-11-26 18:31:46.558200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.321 [2024-11-26 18:31:46.558224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:53.321 [2024-11-26 18:31:46.558233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:53.321 [2024-11-26 18:31:46.558240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.321 [2024-11-26 18:31:46.558310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.321 [2024-11-26 18:31:46.558331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:53.321 [2024-11-26 18:31:46.558338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:53.321 [2024-11-26 18:31:46.558348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.321 [2024-11-26 18:31:46.558383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.321 [2024-11-26 18:31:46.558396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:53.321 [2024-11-26 18:31:46.558404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:53.321 [2024-11-26 18:31:46.558410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.321 [2024-11-26 18:31:46.558443] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:53.321 [2024-11-26 18:31:46.558452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.321 [2024-11-26 18:31:46.558459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:53.321 [2024-11-26 18:31:46.558466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:53.321 [2024-11-26 18:31:46.558473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.321 [2024-11-26 18:31:46.595696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.321 [2024-11-26 18:31:46.595738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:53.321 [2024-11-26 18:31:46.595767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.272 ms 00:27:53.321 [2024-11-26 18:31:46.595776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.321 [2024-11-26 18:31:46.595897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.321 [2024-11-26 18:31:46.595908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:53.321 [2024-11-26 18:31:46.595917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:53.321 [2024-11-26 18:31:46.595924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
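The second startup above mirrors the first: the superblock is loaded and validated (SHM: clean 0, shm_clean 0), the layout is verified and upgraded, NV cache, valid map, band and trim metadata are restored along with the P2L checkpoints and the L2P, and the volume is marked dirty ("Set FTL dirty state") before I/O begins; the matching "Set FTL clean state" appears at each orderly shutdown once all metadata has been persisted. To follow those transitions through a saved copy of the log (again assuming ftl.log):

    # List every dirty/clean superblock transition in order of appearance.
    grep -nE 'Set FTL (dirty|clean) state' ftl.log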
00:27:53.321 [2024-11-26 18:31:46.597014] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:53.321 [2024-11-26 18:31:46.601786] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.824 ms, result 0 00:27:53.321 [2024-11-26 18:31:46.602524] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:53.321 [2024-11-26 18:31:46.621496] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:53.581  [2024-11-26T18:31:46.916Z] Copying: 4096/4096 [kB] (average 25 MBps)[2024-11-26 18:31:46.785946] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:53.581 [2024-11-26 18:31:46.799995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.581 [2024-11-26 18:31:46.800041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:53.581 [2024-11-26 18:31:46.800074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:53.581 [2024-11-26 18:31:46.800082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.581 [2024-11-26 18:31:46.800102] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:53.581 [2024-11-26 18:31:46.804012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.581 [2024-11-26 18:31:46.804039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:53.581 [2024-11-26 18:31:46.804047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.905 ms 00:27:53.581 [2024-11-26 18:31:46.804070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.581 [2024-11-26 18:31:46.806147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.581 [2024-11-26 18:31:46.806184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:53.581 [2024-11-26 18:31:46.806211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.060 ms 00:27:53.582 [2024-11-26 18:31:46.806219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.582 [2024-11-26 18:31:46.809398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.582 [2024-11-26 18:31:46.809430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:53.582 [2024-11-26 18:31:46.809439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.164 ms 00:27:53.582 [2024-11-26 18:31:46.809446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.582 [2024-11-26 18:31:46.814931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.582 [2024-11-26 18:31:46.814962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:53.582 [2024-11-26 18:31:46.814987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.469 ms 00:27:53.582 [2024-11-26 18:31:46.814994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.582 [2024-11-26 18:31:46.848947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.582 [2024-11-26 18:31:46.848984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:53.582 [2024-11-26 18:31:46.849010] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 33.953 ms 00:27:53.582 [2024-11-26 18:31:46.849018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.582 [2024-11-26 18:31:46.868999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.582 [2024-11-26 18:31:46.869045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:53.582 [2024-11-26 18:31:46.869056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.975 ms 00:27:53.582 [2024-11-26 18:31:46.869063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.582 [2024-11-26 18:31:46.869229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.582 [2024-11-26 18:31:46.869239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:53.582 [2024-11-26 18:31:46.869265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:53.582 [2024-11-26 18:31:46.869272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.582 [2024-11-26 18:31:46.903349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.582 [2024-11-26 18:31:46.903384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:53.582 [2024-11-26 18:31:46.903393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.126 ms 00:27:53.582 [2024-11-26 18:31:46.903400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.842 [2024-11-26 18:31:46.937761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.842 [2024-11-26 18:31:46.937799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:53.842 [2024-11-26 18:31:46.937809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.368 ms 00:27:53.842 [2024-11-26 18:31:46.937816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.842 [2024-11-26 18:31:46.972170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.842 [2024-11-26 18:31:46.972207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:53.842 [2024-11-26 18:31:46.972218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.374 ms 00:27:53.842 [2024-11-26 18:31:46.972224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.842 [2024-11-26 18:31:47.006123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.842 [2024-11-26 18:31:47.006159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:53.842 [2024-11-26 18:31:47.006185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.862 ms 00:27:53.842 [2024-11-26 18:31:47.006192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.842 [2024-11-26 18:31:47.006236] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:53.842 [2024-11-26 18:31:47.006249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:27:53.842 [2024-11-26 18:31:47.006280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:53.842 [2024-11-26 18:31:47.006427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006817] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.006986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:53.843 [2024-11-26 18:31:47.007000] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:53.843 [2024-11-26 18:31:47.007007] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36169abc-b0a7-4ba5-a3f2-e4fa05e6ff6b 00:27:53.843 [2024-11-26 18:31:47.007015] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:53.843 [2024-11-26 18:31:47.007021] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:27:53.843 [2024-11-26 18:31:47.007028] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:53.843 [2024-11-26 18:31:47.007035] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:53.843 [2024-11-26 18:31:47.007042] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:53.843 [2024-11-26 18:31:47.007049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:53.843 [2024-11-26 18:31:47.007062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:53.843 [2024-11-26 18:31:47.007069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:53.843 [2024-11-26 18:31:47.007075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:53.843 [2024-11-26 18:31:47.007082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.843 [2024-11-26 18:31:47.007090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:53.843 [2024-11-26 18:31:47.007098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:27:53.843 [2024-11-26 18:31:47.007106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.843 [2024-11-26 18:31:47.026160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.843 [2024-11-26 18:31:47.026193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:53.843 [2024-11-26 18:31:47.026203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.072 ms 00:27:53.843 [2024-11-26 18:31:47.026210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.844 [2024-11-26 18:31:47.026805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.844 [2024-11-26 18:31:47.026824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:53.844 [2024-11-26 18:31:47.026832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:27:53.844 [2024-11-26 18:31:47.026839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.844 [2024-11-26 18:31:47.077393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.844 [2024-11-26 18:31:47.077430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:53.844 [2024-11-26 18:31:47.077456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.844 [2024-11-26 18:31:47.077470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.844 [2024-11-26 18:31:47.077554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.844 [2024-11-26 18:31:47.077563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:53.844 [2024-11-26 18:31:47.077570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.844 [2024-11-26 18:31:47.077578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.844 [2024-11-26 18:31:47.077642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.844 [2024-11-26 18:31:47.077653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:53.844 [2024-11-26 18:31:47.077661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.844 [2024-11-26 18:31:47.077668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.844 [2024-11-26 18:31:47.077692] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.844 [2024-11-26 18:31:47.077700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:53.844 [2024-11-26 18:31:47.077707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.844 [2024-11-26 18:31:47.077714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.104 [2024-11-26 18:31:47.192294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.104 [2024-11-26 18:31:47.192354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:54.104 [2024-11-26 18:31:47.192365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.104 [2024-11-26 18:31:47.192397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.104 [2024-11-26 18:31:47.288000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.104 [2024-11-26 18:31:47.288058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:54.104 [2024-11-26 18:31:47.288069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.104 [2024-11-26 18:31:47.288077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.104 [2024-11-26 18:31:47.288170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.104 [2024-11-26 18:31:47.288180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:54.104 [2024-11-26 18:31:47.288188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.104 [2024-11-26 18:31:47.288195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.104 [2024-11-26 18:31:47.288220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.104 [2024-11-26 18:31:47.288233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:54.104 [2024-11-26 18:31:47.288240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.104 [2024-11-26 18:31:47.288247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.104 [2024-11-26 18:31:47.288347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.104 [2024-11-26 18:31:47.288357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:54.104 [2024-11-26 18:31:47.288365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.104 [2024-11-26 18:31:47.288372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.104 [2024-11-26 18:31:47.288406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.104 [2024-11-26 18:31:47.288415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:54.104 [2024-11-26 18:31:47.288426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.104 [2024-11-26 18:31:47.288432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.104 [2024-11-26 18:31:47.288470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.104 [2024-11-26 18:31:47.288480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:54.104 [2024-11-26 18:31:47.288486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.104 [2024-11-26 18:31:47.288493] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:54.104 [2024-11-26 18:31:47.288550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.104 [2024-11-26 18:31:47.288562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:54.104 [2024-11-26 18:31:47.288569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.104 [2024-11-26 18:31:47.288577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.104 [2024-11-26 18:31:47.288731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 489.662 ms, result 0 00:27:55.044 00:27:55.044 00:27:55.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.044 18:31:48 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79607 00:27:55.044 18:31:48 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79607 00:27:55.044 18:31:48 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79607 ']' 00:27:55.044 18:31:48 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.044 18:31:48 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:55.044 18:31:48 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.044 18:31:48 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:27:55.044 18:31:48 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:55.044 18:31:48 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:55.304 [2024-11-26 18:31:48.435091] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:27:55.304 [2024-11-26 18:31:48.435203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79607 ] 00:27:55.304 [2024-11-26 18:31:48.604854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.564 [2024-11-26 18:31:48.717610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.502 18:31:49 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:56.502 18:31:49 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:27:56.502 18:31:49 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:27:56.502 [2024-11-26 18:31:49.743239] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:56.502 [2024-11-26 18:31:49.743298] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:56.763 [2024-11-26 18:31:49.920662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.763 [2024-11-26 18:31:49.920715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:56.763 [2024-11-26 18:31:49.920747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:56.763 [2024-11-26 18:31:49.920762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.763 [2024-11-26 18:31:49.924189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.763 [2024-11-26 18:31:49.924221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:56.763 [2024-11-26 18:31:49.924232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.416 ms 00:27:56.763 [2024-11-26 18:31:49.924240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.763 [2024-11-26 18:31:49.924347] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:56.763 [2024-11-26 18:31:49.925295] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:56.763 [2024-11-26 18:31:49.925329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.763 [2024-11-26 18:31:49.925338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:56.763 [2024-11-26 18:31:49.925348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:27:56.763 [2024-11-26 18:31:49.925358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.763 [2024-11-26 18:31:49.926810] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:56.763 [2024-11-26 18:31:49.945137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.763 [2024-11-26 18:31:49.945178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:56.763 [2024-11-26 18:31:49.945189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.367 ms 00:27:56.763 [2024-11-26 18:31:49.945213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.763 [2024-11-26 18:31:49.945294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.763 [2024-11-26 18:31:49.945306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:56.763 [2024-11-26 18:31:49.945315] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:56.763 [2024-11-26 18:31:49.945324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.763 [2024-11-26 18:31:49.952015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.763 [2024-11-26 18:31:49.952056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:56.763 [2024-11-26 18:31:49.952066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.658 ms 00:27:56.763 [2024-11-26 18:31:49.952075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.763 [2024-11-26 18:31:49.952188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.763 [2024-11-26 18:31:49.952204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:56.763 [2024-11-26 18:31:49.952212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:56.763 [2024-11-26 18:31:49.952224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.763 [2024-11-26 18:31:49.952252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.763 [2024-11-26 18:31:49.952262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:56.763 [2024-11-26 18:31:49.952270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:56.763 [2024-11-26 18:31:49.952278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.763 [2024-11-26 18:31:49.952303] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:56.763 [2024-11-26 18:31:49.956968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.763 [2024-11-26 18:31:49.956996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:56.763 [2024-11-26 18:31:49.957007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.680 ms 00:27:56.763 [2024-11-26 18:31:49.957015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.763 [2024-11-26 18:31:49.957076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.763 [2024-11-26 18:31:49.957086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:56.763 [2024-11-26 18:31:49.957098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:56.763 [2024-11-26 18:31:49.957105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.763 [2024-11-26 18:31:49.957126] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:56.763 [2024-11-26 18:31:49.957143] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:56.763 [2024-11-26 18:31:49.957186] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:56.763 [2024-11-26 18:31:49.957203] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:56.763 [2024-11-26 18:31:49.957288] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:56.763 [2024-11-26 18:31:49.957302] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:56.763 [2024-11-26 18:31:49.957318] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:56.763 [2024-11-26 18:31:49.957327] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:56.763 [2024-11-26 18:31:49.957338] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:56.763 [2024-11-26 18:31:49.957346] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:56.763 [2024-11-26 18:31:49.957354] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:56.763 [2024-11-26 18:31:49.957361] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:56.763 [2024-11-26 18:31:49.957372] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:56.763 [2024-11-26 18:31:49.957380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.763 [2024-11-26 18:31:49.957389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:56.763 [2024-11-26 18:31:49.957397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:27:56.763 [2024-11-26 18:31:49.957424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.763 [2024-11-26 18:31:49.957501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.763 [2024-11-26 18:31:49.957526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:56.763 [2024-11-26 18:31:49.957534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:56.763 [2024-11-26 18:31:49.957543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.763 [2024-11-26 18:31:49.957646] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:56.763 [2024-11-26 18:31:49.957662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:56.763 [2024-11-26 18:31:49.957670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:56.763 [2024-11-26 18:31:49.957680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.763 [2024-11-26 18:31:49.957689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:56.763 [2024-11-26 18:31:49.957697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:56.763 [2024-11-26 18:31:49.957704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:56.763 [2024-11-26 18:31:49.957716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:56.763 [2024-11-26 18:31:49.957723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:56.763 [2024-11-26 18:31:49.957732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:56.763 [2024-11-26 18:31:49.957739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:56.763 [2024-11-26 18:31:49.957748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:56.763 [2024-11-26 18:31:49.957754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:56.763 [2024-11-26 18:31:49.957763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:56.763 [2024-11-26 18:31:49.957770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:56.763 [2024-11-26 18:31:49.957778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.763 
[2024-11-26 18:31:49.957785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:56.764 [2024-11-26 18:31:49.957794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:56.764 [2024-11-26 18:31:49.957810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.764 [2024-11-26 18:31:49.957820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:56.764 [2024-11-26 18:31:49.957826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:56.764 [2024-11-26 18:31:49.957843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.764 [2024-11-26 18:31:49.957850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:56.764 [2024-11-26 18:31:49.957869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:56.764 [2024-11-26 18:31:49.957877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.764 [2024-11-26 18:31:49.957888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:56.764 [2024-11-26 18:31:49.957895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:56.764 [2024-11-26 18:31:49.957905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.764 [2024-11-26 18:31:49.957912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:56.764 [2024-11-26 18:31:49.957923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:56.764 [2024-11-26 18:31:49.957930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.764 [2024-11-26 18:31:49.957940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:56.764 [2024-11-26 18:31:49.957947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:56.764 [2024-11-26 18:31:49.957959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:56.764 [2024-11-26 18:31:49.957967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:56.764 [2024-11-26 18:31:49.957978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:56.764 [2024-11-26 18:31:49.957997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:56.764 [2024-11-26 18:31:49.958008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:56.764 [2024-11-26 18:31:49.958015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:56.764 [2024-11-26 18:31:49.958029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.764 [2024-11-26 18:31:49.958036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:56.764 [2024-11-26 18:31:49.958046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:56.764 [2024-11-26 18:31:49.958053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.764 [2024-11-26 18:31:49.958063] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:56.764 [2024-11-26 18:31:49.958075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:56.764 [2024-11-26 18:31:49.958085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:56.764 [2024-11-26 18:31:49.958093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.764 [2024-11-26 18:31:49.958104] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:27:56.764 [2024-11-26 18:31:49.958111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:56.764 [2024-11-26 18:31:49.958121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:56.764 [2024-11-26 18:31:49.958128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:56.764 [2024-11-26 18:31:49.958138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:56.764 [2024-11-26 18:31:49.958144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:56.764 [2024-11-26 18:31:49.958157] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:56.764 [2024-11-26 18:31:49.958166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:56.764 [2024-11-26 18:31:49.958181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:56.764 [2024-11-26 18:31:49.958189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:56.764 [2024-11-26 18:31:49.958201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:56.764 [2024-11-26 18:31:49.958208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:56.764 [2024-11-26 18:31:49.958219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:56.764 [2024-11-26 18:31:49.958226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:56.764 [2024-11-26 18:31:49.958237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:56.764 [2024-11-26 18:31:49.958244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:56.764 [2024-11-26 18:31:49.958255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:56.764 [2024-11-26 18:31:49.958262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:56.764 [2024-11-26 18:31:49.958274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:56.764 [2024-11-26 18:31:49.958281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:56.764 [2024-11-26 18:31:49.958291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:56.764 [2024-11-26 18:31:49.958299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:56.764 [2024-11-26 18:31:49.958309] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:56.764 [2024-11-26 
18:31:49.958318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:56.764 [2024-11-26 18:31:49.958332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:56.764 [2024-11-26 18:31:49.958340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:56.764 [2024-11-26 18:31:49.958350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:56.764 [2024-11-26 18:31:49.958358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:56.764 [2024-11-26 18:31:49.958369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.764 [2024-11-26 18:31:49.958378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:56.764 [2024-11-26 18:31:49.958390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:27:56.764 [2024-11-26 18:31:49.958400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.764 [2024-11-26 18:31:49.996185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.764 [2024-11-26 18:31:49.996231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:56.764 [2024-11-26 18:31:49.996261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.794 ms 00:27:56.764 [2024-11-26 18:31:49.996271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.764 [2024-11-26 18:31:49.996396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.764 [2024-11-26 18:31:49.996406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:56.764 [2024-11-26 18:31:49.996415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:56.764 [2024-11-26 18:31:49.996422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.764 [2024-11-26 18:31:50.041806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.764 [2024-11-26 18:31:50.041849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:56.764 [2024-11-26 18:31:50.041866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.445 ms 00:27:56.764 [2024-11-26 18:31:50.041875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.764 [2024-11-26 18:31:50.041974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.764 [2024-11-26 18:31:50.041986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:56.764 [2024-11-26 18:31:50.041999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:56.764 [2024-11-26 18:31:50.042007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.764 [2024-11-26 18:31:50.042465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.764 [2024-11-26 18:31:50.042487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:56.764 [2024-11-26 18:31:50.042500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:27:56.764 [2024-11-26 18:31:50.042508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:56.764 [2024-11-26 18:31:50.042660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.764 [2024-11-26 18:31:50.042680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:56.764 [2024-11-26 18:31:50.042694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:27:56.764 [2024-11-26 18:31:50.042701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.764 [2024-11-26 18:31:50.063449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.764 [2024-11-26 18:31:50.063486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:56.764 [2024-11-26 18:31:50.063517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.760 ms 00:27:56.764 [2024-11-26 18:31:50.063524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.027 [2024-11-26 18:31:50.096958] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:57.027 [2024-11-26 18:31:50.096996] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:57.027 [2024-11-26 18:31:50.097013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.027 [2024-11-26 18:31:50.097021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:57.027 [2024-11-26 18:31:50.097033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.426 ms 00:27:57.027 [2024-11-26 18:31:50.097050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.027 [2024-11-26 18:31:50.126426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.027 [2024-11-26 18:31:50.126471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:57.027 [2024-11-26 18:31:50.126484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.353 ms 00:27:57.027 [2024-11-26 18:31:50.126492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.027 [2024-11-26 18:31:50.144694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.027 [2024-11-26 18:31:50.144733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:57.027 [2024-11-26 18:31:50.144770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.134 ms 00:27:57.027 [2024-11-26 18:31:50.144778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.027 [2024-11-26 18:31:50.162289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.027 [2024-11-26 18:31:50.162324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:57.027 [2024-11-26 18:31:50.162351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.445 ms 00:27:57.027 [2024-11-26 18:31:50.162358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.027 [2024-11-26 18:31:50.163133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.027 [2024-11-26 18:31:50.163164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:57.027 [2024-11-26 18:31:50.163176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:27:57.027 [2024-11-26 18:31:50.163184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.027 [2024-11-26 
18:31:50.246109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.027 [2024-11-26 18:31:50.246174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:57.027 [2024-11-26 18:31:50.246191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.051 ms 00:27:57.027 [2024-11-26 18:31:50.246214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.027 [2024-11-26 18:31:50.257226] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:57.027 [2024-11-26 18:31:50.273208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.027 [2024-11-26 18:31:50.273280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:57.027 [2024-11-26 18:31:50.273292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.889 ms 00:27:57.027 [2024-11-26 18:31:50.273317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.027 [2024-11-26 18:31:50.273424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.027 [2024-11-26 18:31:50.273437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:57.027 [2024-11-26 18:31:50.273445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:57.027 [2024-11-26 18:31:50.273453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.027 [2024-11-26 18:31:50.273505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.027 [2024-11-26 18:31:50.273515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:57.027 [2024-11-26 18:31:50.273523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:57.027 [2024-11-26 18:31:50.273534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.027 [2024-11-26 18:31:50.273555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.027 [2024-11-26 18:31:50.273564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:57.027 [2024-11-26 18:31:50.273572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:57.027 [2024-11-26 18:31:50.273580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.027 [2024-11-26 18:31:50.273613] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:57.027 [2024-11-26 18:31:50.273626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.027 [2024-11-26 18:31:50.273655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:57.027 [2024-11-26 18:31:50.273664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:57.027 [2024-11-26 18:31:50.273690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.027 [2024-11-26 18:31:50.308000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.027 [2024-11-26 18:31:50.308040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:57.027 [2024-11-26 18:31:50.308069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.351 ms 00:27:57.027 [2024-11-26 18:31:50.308076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.027 [2024-11-26 18:31:50.308175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.027 [2024-11-26 18:31:50.308185] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:57.027 [2024-11-26 18:31:50.308197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:57.027 [2024-11-26 18:31:50.308205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.027 [2024-11-26 18:31:50.309149] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:57.027 [2024-11-26 18:31:50.313066] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 388.966 ms, result 0 00:27:57.027 [2024-11-26 18:31:50.314264] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:57.027 Some configs were skipped because the RPC state that can call them passed over. 00:27:57.290 18:31:50 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:27:57.290 [2024-11-26 18:31:50.548853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.290 [2024-11-26 18:31:50.548916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:57.290 [2024-11-26 18:31:50.548931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:27:57.290 [2024-11-26 18:31:50.548943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.290 [2024-11-26 18:31:50.548980] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.826 ms, result 0 00:27:57.290 true 00:27:57.290 18:31:50 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:27:57.550 [2024-11-26 18:31:50.760186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.550 [2024-11-26 18:31:50.760232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:27:57.550 [2024-11-26 18:31:50.760248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:27:57.550 [2024-11-26 18:31:50.760258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.550 [2024-11-26 18:31:50.760298] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.353 ms, result 0 00:27:57.550 true 00:27:57.550 18:31:50 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79607 00:27:57.550 18:31:50 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79607 ']' 00:27:57.550 18:31:50 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79607 00:27:57.550 18:31:50 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:27:57.550 18:31:50 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:57.550 18:31:50 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79607 00:27:57.550 killing process with pid 79607 00:27:57.550 18:31:50 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:57.550 18:31:50 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:57.550 18:31:50 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79607' 00:27:57.550 18:31:50 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79607 00:27:57.550 18:31:50 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79607 00:27:58.932 [2024-11-26 18:31:51.877039] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.932 [2024-11-26 18:31:51.877099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:58.932 [2024-11-26 18:31:51.877111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:58.932 [2024-11-26 18:31:51.877120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.932 [2024-11-26 18:31:51.877143] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:58.932 [2024-11-26 18:31:51.881269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.932 [2024-11-26 18:31:51.881299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:58.932 [2024-11-26 18:31:51.881312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.117 ms 00:27:58.932 [2024-11-26 18:31:51.881319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.932 [2024-11-26 18:31:51.881567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.932 [2024-11-26 18:31:51.881586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:58.932 [2024-11-26 18:31:51.881596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:27:58.932 [2024-11-26 18:31:51.881604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.932 [2024-11-26 18:31:51.885118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.932 [2024-11-26 18:31:51.885157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:58.932 [2024-11-26 18:31:51.885168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.492 ms 00:27:58.932 [2024-11-26 18:31:51.885176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.932 [2024-11-26 18:31:51.890614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.932 [2024-11-26 18:31:51.890652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:58.932 [2024-11-26 18:31:51.890663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.413 ms 00:27:58.932 [2024-11-26 18:31:51.890670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.932 [2024-11-26 18:31:51.904836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.932 [2024-11-26 18:31:51.904880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:58.932 [2024-11-26 18:31:51.904910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.121 ms 00:27:58.932 [2024-11-26 18:31:51.904917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.932 [2024-11-26 18:31:51.915524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.932 [2024-11-26 18:31:51.915560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:58.932 [2024-11-26 18:31:51.915571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.577 ms 00:27:58.932 [2024-11-26 18:31:51.915578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.932 [2024-11-26 18:31:51.915724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.932 [2024-11-26 18:31:51.915735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:58.932 [2024-11-26 18:31:51.915745] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:58.932 [2024-11-26 18:31:51.915752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.932 [2024-11-26 18:31:51.930277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.932 [2024-11-26 18:31:51.930309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:58.932 [2024-11-26 18:31:51.930335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.534 ms 00:27:58.932 [2024-11-26 18:31:51.930342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.932 [2024-11-26 18:31:51.944532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.932 [2024-11-26 18:31:51.944564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:58.932 [2024-11-26 18:31:51.944577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.172 ms 00:27:58.932 [2024-11-26 18:31:51.944583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.932 [2024-11-26 18:31:51.958303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.932 [2024-11-26 18:31:51.958335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:58.932 [2024-11-26 18:31:51.958346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.677 ms 00:27:58.932 [2024-11-26 18:31:51.958352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.932 [2024-11-26 18:31:51.971829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.932 [2024-11-26 18:31:51.971861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:58.932 [2024-11-26 18:31:51.971872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.420 ms 00:27:58.932 [2024-11-26 18:31:51.971877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.932 [2024-11-26 18:31:51.971942] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:58.932 [2024-11-26 18:31:51.971954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:58.932 [2024-11-26 18:31:51.971967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:58.932 [2024-11-26 18:31:51.971975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:58.932 [2024-11-26 18:31:51.971984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:58.932 [2024-11-26 18:31:51.971991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 
18:31:51.972057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:27:58.933 [2024-11-26 18:31:51.972257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:58.933 [2024-11-26 18:31:51.972741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:58.934 [2024-11-26 18:31:51.972748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:58.934 [2024-11-26 18:31:51.972765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:58.934 [2024-11-26 18:31:51.972772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:58.934 [2024-11-26 18:31:51.972781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:58.934 [2024-11-26 18:31:51.972788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:58.934 [2024-11-26 18:31:51.972797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:58.934 [2024-11-26 18:31:51.972838] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:58.934 [2024-11-26 18:31:51.972851] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36169abc-b0a7-4ba5-a3f2-e4fa05e6ff6b 00:27:58.934 [2024-11-26 18:31:51.972862] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:58.934 [2024-11-26 18:31:51.972871] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:58.934 [2024-11-26 18:31:51.972878] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:58.934 [2024-11-26 18:31:51.972888] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:58.934 [2024-11-26 18:31:51.972895] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:58.934 [2024-11-26 18:31:51.972904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:58.934 [2024-11-26 18:31:51.972912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:58.934 [2024-11-26 18:31:51.972920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:58.934 [2024-11-26 18:31:51.972927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:58.934 [2024-11-26 18:31:51.972936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:58.934 [2024-11-26 18:31:51.972944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:58.934 [2024-11-26 18:31:51.972955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:27:58.934 [2024-11-26 18:31:51.972964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.934 [2024-11-26 18:31:51.991805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.934 [2024-11-26 18:31:51.991837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:58.934 [2024-11-26 18:31:51.991850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.843 ms 00:27:58.934 [2024-11-26 18:31:51.991857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.934 [2024-11-26 18:31:51.992384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.934 [2024-11-26 18:31:51.992400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:58.934 [2024-11-26 18:31:51.992412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:27:58.934 [2024-11-26 18:31:51.992419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.934 [2024-11-26 18:31:52.056973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.934 [2024-11-26 18:31:52.057018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:58.934 [2024-11-26 18:31:52.057033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.934 [2024-11-26 18:31:52.057042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.934 [2024-11-26 18:31:52.057135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.934 [2024-11-26 18:31:52.057145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:58.934 [2024-11-26 18:31:52.057163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.934 [2024-11-26 18:31:52.057170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.934 [2024-11-26 18:31:52.057226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.934 [2024-11-26 18:31:52.057237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:58.934 [2024-11-26 18:31:52.057253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.934 [2024-11-26 18:31:52.057260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.934 [2024-11-26 18:31:52.057281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.934 [2024-11-26 18:31:52.057289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:58.934 [2024-11-26 18:31:52.057300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.934 [2024-11-26 18:31:52.057311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.934 [2024-11-26 18:31:52.176772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:58.934 [2024-11-26 18:31:52.176854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:58.934 [2024-11-26 18:31:52.176868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:58.934 [2024-11-26 18:31:52.176876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.194 [2024-11-26 
18:31:52.271789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.194 [2024-11-26 18:31:52.271849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:59.194 [2024-11-26 18:31:52.271866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.194 [2024-11-26 18:31:52.271889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.194 [2024-11-26 18:31:52.271975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.194 [2024-11-26 18:31:52.271984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:59.194 [2024-11-26 18:31:52.271996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.194 [2024-11-26 18:31:52.272003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.194 [2024-11-26 18:31:52.272031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.194 [2024-11-26 18:31:52.272039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:59.194 [2024-11-26 18:31:52.272048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.194 [2024-11-26 18:31:52.272055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.194 [2024-11-26 18:31:52.272159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.194 [2024-11-26 18:31:52.272170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:59.194 [2024-11-26 18:31:52.272179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.194 [2024-11-26 18:31:52.272187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.194 [2024-11-26 18:31:52.272224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.194 [2024-11-26 18:31:52.272235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:59.194 [2024-11-26 18:31:52.272244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.194 [2024-11-26 18:31:52.272251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.194 [2024-11-26 18:31:52.272307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.194 [2024-11-26 18:31:52.272331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:59.194 [2024-11-26 18:31:52.272342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.194 [2024-11-26 18:31:52.272349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.194 [2024-11-26 18:31:52.272394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.194 [2024-11-26 18:31:52.272403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:59.194 [2024-11-26 18:31:52.272412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.194 [2024-11-26 18:31:52.272420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.194 [2024-11-26 18:31:52.272554] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 396.252 ms, result 0 00:28:00.132 18:31:53 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:00.132 [2024-11-26 18:31:53.336218] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:28:00.132 [2024-11-26 18:31:53.336330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79671 ] 00:28:00.392 [2024-11-26 18:31:53.510271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.392 [2024-11-26 18:31:53.619561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.651 [2024-11-26 18:31:53.973613] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:00.651 [2024-11-26 18:31:53.973697] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:00.912 [2024-11-26 18:31:54.129393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.912 [2024-11-26 18:31:54.129447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:00.912 [2024-11-26 18:31:54.129460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:00.912 [2024-11-26 18:31:54.129468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.912 [2024-11-26 18:31:54.132308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.912 [2024-11-26 18:31:54.132340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:00.912 [2024-11-26 18:31:54.132365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.829 ms 00:28:00.912 [2024-11-26 18:31:54.132373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.912 [2024-11-26 18:31:54.132470] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:00.912 [2024-11-26 18:31:54.133487] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:00.912 [2024-11-26 18:31:54.133519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.912 [2024-11-26 18:31:54.133528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:00.912 [2024-11-26 18:31:54.133537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:28:00.912 [2024-11-26 18:31:54.133545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.912 [2024-11-26 18:31:54.135014] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:00.912 [2024-11-26 18:31:54.153632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.912 [2024-11-26 18:31:54.153670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:00.912 [2024-11-26 18:31:54.153698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.655 ms 00:28:00.912 [2024-11-26 18:31:54.153705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.912 [2024-11-26 18:31:54.153793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.912 [2024-11-26 18:31:54.153805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:00.912 [2024-11-26 18:31:54.153814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:28:00.912 [2024-11-26 
18:31:54.153821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.912 [2024-11-26 18:31:54.160533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.912 [2024-11-26 18:31:54.160567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:00.912 [2024-11-26 18:31:54.160592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.688 ms 00:28:00.912 [2024-11-26 18:31:54.160599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.912 [2024-11-26 18:31:54.160722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.912 [2024-11-26 18:31:54.160738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:00.912 [2024-11-26 18:31:54.160746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:28:00.912 [2024-11-26 18:31:54.160754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.912 [2024-11-26 18:31:54.160795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.912 [2024-11-26 18:31:54.160803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:00.912 [2024-11-26 18:31:54.160810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:00.912 [2024-11-26 18:31:54.160818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.912 [2024-11-26 18:31:54.160840] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:00.912 [2024-11-26 18:31:54.165335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.912 [2024-11-26 18:31:54.165367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:00.912 [2024-11-26 18:31:54.165376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.511 ms 00:28:00.912 [2024-11-26 18:31:54.165383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.912 [2024-11-26 18:31:54.165442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.912 [2024-11-26 18:31:54.165452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:00.912 [2024-11-26 18:31:54.165460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:00.912 [2024-11-26 18:31:54.165467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.912 [2024-11-26 18:31:54.165489] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:00.912 [2024-11-26 18:31:54.165509] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:00.912 [2024-11-26 18:31:54.165540] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:00.912 [2024-11-26 18:31:54.165555] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:00.912 [2024-11-26 18:31:54.165650] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:00.912 [2024-11-26 18:31:54.165664] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:00.912 [2024-11-26 18:31:54.165674] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:28:00.912 [2024-11-26 18:31:54.165687] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:00.912 [2024-11-26 18:31:54.165695] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:00.912 [2024-11-26 18:31:54.165703] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:00.912 [2024-11-26 18:31:54.165726] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:00.912 [2024-11-26 18:31:54.165734] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:00.912 [2024-11-26 18:31:54.165741] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:00.912 [2024-11-26 18:31:54.165748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.912 [2024-11-26 18:31:54.165756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:00.912 [2024-11-26 18:31:54.165764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:28:00.912 [2024-11-26 18:31:54.165772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.912 [2024-11-26 18:31:54.165848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.912 [2024-11-26 18:31:54.165861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:00.912 [2024-11-26 18:31:54.165868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:00.912 [2024-11-26 18:31:54.165875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.912 [2024-11-26 18:31:54.165965] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:00.912 [2024-11-26 18:31:54.165996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:00.912 [2024-11-26 18:31:54.166005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:00.912 [2024-11-26 18:31:54.166025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:00.912 [2024-11-26 18:31:54.166033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:00.912 [2024-11-26 18:31:54.166040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:00.912 [2024-11-26 18:31:54.166047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:00.912 [2024-11-26 18:31:54.166053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:00.912 [2024-11-26 18:31:54.166059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:00.912 [2024-11-26 18:31:54.166066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:00.912 [2024-11-26 18:31:54.166073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:00.912 [2024-11-26 18:31:54.166093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:00.912 [2024-11-26 18:31:54.166100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:00.912 [2024-11-26 18:31:54.166107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:00.912 [2024-11-26 18:31:54.166114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:00.912 [2024-11-26 18:31:54.166121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:00.912 [2024-11-26 18:31:54.166128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:28:00.912 [2024-11-26 18:31:54.166134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:00.912 [2024-11-26 18:31:54.166140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:00.912 [2024-11-26 18:31:54.166147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:00.912 [2024-11-26 18:31:54.166154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:00.912 [2024-11-26 18:31:54.166160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:00.912 [2024-11-26 18:31:54.166166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:00.912 [2024-11-26 18:31:54.166172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:00.912 [2024-11-26 18:31:54.166179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:00.912 [2024-11-26 18:31:54.166185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:00.912 [2024-11-26 18:31:54.166191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:00.912 [2024-11-26 18:31:54.166197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:00.912 [2024-11-26 18:31:54.166203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:00.913 [2024-11-26 18:31:54.166210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:00.913 [2024-11-26 18:31:54.166215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:00.913 [2024-11-26 18:31:54.166221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:00.913 [2024-11-26 18:31:54.166227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:00.913 [2024-11-26 18:31:54.166233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:00.913 [2024-11-26 18:31:54.166239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:00.913 [2024-11-26 18:31:54.166245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:00.913 [2024-11-26 18:31:54.166251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:00.913 [2024-11-26 18:31:54.166258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:00.913 [2024-11-26 18:31:54.166264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:00.913 [2024-11-26 18:31:54.166270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:00.913 [2024-11-26 18:31:54.166276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:00.913 [2024-11-26 18:31:54.166282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:00.913 [2024-11-26 18:31:54.166289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:00.913 [2024-11-26 18:31:54.166295] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:00.913 [2024-11-26 18:31:54.166302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:00.913 [2024-11-26 18:31:54.166313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:00.913 [2024-11-26 18:31:54.166319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:00.913 [2024-11-26 18:31:54.166327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:00.913 [2024-11-26 18:31:54.166333] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:00.913 [2024-11-26 18:31:54.166340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:00.913 [2024-11-26 18:31:54.166347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:00.913 [2024-11-26 18:31:54.166352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:00.913 [2024-11-26 18:31:54.166359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:00.913 [2024-11-26 18:31:54.166367] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:00.913 [2024-11-26 18:31:54.166375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:00.913 [2024-11-26 18:31:54.166383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:00.913 [2024-11-26 18:31:54.166391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:00.913 [2024-11-26 18:31:54.166398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:00.913 [2024-11-26 18:31:54.166405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:00.913 [2024-11-26 18:31:54.166411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:00.913 [2024-11-26 18:31:54.166418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:00.913 [2024-11-26 18:31:54.166425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:00.913 [2024-11-26 18:31:54.166431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:00.913 [2024-11-26 18:31:54.166438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:00.913 [2024-11-26 18:31:54.166445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:00.913 [2024-11-26 18:31:54.166451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:00.913 [2024-11-26 18:31:54.166458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:00.913 [2024-11-26 18:31:54.166464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:00.913 [2024-11-26 18:31:54.166471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:00.913 [2024-11-26 18:31:54.166477] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:00.913 [2024-11-26 18:31:54.166485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:00.913 [2024-11-26 18:31:54.166493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:00.913 [2024-11-26 18:31:54.166500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:00.913 [2024-11-26 18:31:54.166507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:00.913 [2024-11-26 18:31:54.166515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:00.913 [2024-11-26 18:31:54.166523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.913 [2024-11-26 18:31:54.166534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:00.913 [2024-11-26 18:31:54.166541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:28:00.913 [2024-11-26 18:31:54.166548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.913 [2024-11-26 18:31:54.203480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.913 [2024-11-26 18:31:54.203541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:00.913 [2024-11-26 18:31:54.203554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.946 ms 00:28:00.913 [2024-11-26 18:31:54.203563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.913 [2024-11-26 18:31:54.203724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:00.913 [2024-11-26 18:31:54.203736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:00.913 [2024-11-26 18:31:54.203745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:00.913 [2024-11-26 18:31:54.203753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.261777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.261820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:01.173 [2024-11-26 18:31:54.261852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.114 ms 00:28:01.173 [2024-11-26 18:31:54.261860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.261977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.261987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:01.173 [2024-11-26 18:31:54.261996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:01.173 [2024-11-26 18:31:54.262003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.262442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.262462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:01.173 [2024-11-26 18:31:54.262476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:28:01.173 [2024-11-26 18:31:54.262483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.262596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.262624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:01.173 [2024-11-26 18:31:54.262633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:28:01.173 [2024-11-26 18:31:54.262641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.281462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.281501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:01.173 [2024-11-26 18:31:54.281512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.833 ms 00:28:01.173 [2024-11-26 18:31:54.281520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.300549] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:01.173 [2024-11-26 18:31:54.300586] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:01.173 [2024-11-26 18:31:54.300614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.300622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:01.173 [2024-11-26 18:31:54.300638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.011 ms 00:28:01.173 [2024-11-26 18:31:54.300645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.329418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.329463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:01.173 [2024-11-26 18:31:54.329474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.751 ms 00:28:01.173 [2024-11-26 18:31:54.329482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.347120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.347157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:01.173 [2024-11-26 18:31:54.347183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.580 ms 00:28:01.173 [2024-11-26 18:31:54.347189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.365142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.365178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:01.173 [2024-11-26 18:31:54.365204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.922 ms 00:28:01.173 [2024-11-26 18:31:54.365211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.365896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.365928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:01.173 [2024-11-26 18:31:54.365938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:28:01.173 [2024-11-26 18:31:54.365946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.448927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 
18:31:54.449013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:01.173 [2024-11-26 18:31:54.449029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.116 ms 00:28:01.173 [2024-11-26 18:31:54.449037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.460142] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:01.173 [2024-11-26 18:31:54.475854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.475932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:01.173 [2024-11-26 18:31:54.475945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.750 ms 00:28:01.173 [2024-11-26 18:31:54.475959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.476092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.476102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:01.173 [2024-11-26 18:31:54.476111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:01.173 [2024-11-26 18:31:54.476118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.476171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.476179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:01.173 [2024-11-26 18:31:54.476186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:01.173 [2024-11-26 18:31:54.476213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.476247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.476260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:01.173 [2024-11-26 18:31:54.476268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:01.173 [2024-11-26 18:31:54.476276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-11-26 18:31:54.476311] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:01.173 [2024-11-26 18:31:54.476321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-11-26 18:31:54.476328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:01.173 [2024-11-26 18:31:54.476335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:01.173 [2024-11-26 18:31:54.476342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.432 [2024-11-26 18:31:54.511541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.432 [2024-11-26 18:31:54.511581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:01.432 [2024-11-26 18:31:54.511593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.248 ms 00:28:01.432 [2024-11-26 18:31:54.511601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.432 [2024-11-26 18:31:54.511731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.432 [2024-11-26 18:31:54.511743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:01.432 [2024-11-26 
18:31:54.511751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:01.432 [2024-11-26 18:31:54.511758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.432 [2024-11-26 18:31:54.512800] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:01.432 [2024-11-26 18:31:54.517144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.779 ms, result 0 00:28:01.432 [2024-11-26 18:31:54.517927] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:01.432 [2024-11-26 18:31:54.535929] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:02.372  [2024-11-26T18:31:56.647Z] Copying: 30/256 [MB] (30 MBps) [2024-11-26T18:31:57.587Z] Copying: 56/256 [MB] (26 MBps) [2024-11-26T18:31:58.967Z] Copying: 81/256 [MB] (25 MBps) [2024-11-26T18:31:59.906Z] Copying: 108/256 [MB] (26 MBps) [2024-11-26T18:32:00.845Z] Copying: 134/256 [MB] (26 MBps) [2024-11-26T18:32:01.781Z] Copying: 160/256 [MB] (25 MBps) [2024-11-26T18:32:02.722Z] Copying: 186/256 [MB] (25 MBps) [2024-11-26T18:32:03.661Z] Copying: 212/256 [MB] (26 MBps) [2024-11-26T18:32:04.230Z] Copying: 240/256 [MB] (28 MBps) [2024-11-26T18:32:04.490Z] Copying: 256/256 [MB] (average 26 MBps)[2024-11-26 18:32:04.308006] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:11.155 [2024-11-26 18:32:04.333277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.155 [2024-11-26 18:32:04.333335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:11.155 [2024-11-26 18:32:04.333359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:11.155 [2024-11-26 18:32:04.333368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.155 [2024-11-26 18:32:04.333399] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:11.155 [2024-11-26 18:32:04.337599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.155 [2024-11-26 18:32:04.337632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:11.155 [2024-11-26 18:32:04.337642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.190 ms 00:28:11.155 [2024-11-26 18:32:04.337650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.155 [2024-11-26 18:32:04.337893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.155 [2024-11-26 18:32:04.337927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:11.155 [2024-11-26 18:32:04.337935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:28:11.155 [2024-11-26 18:32:04.337943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.155 [2024-11-26 18:32:04.340697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.155 [2024-11-26 18:32:04.340719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:11.155 [2024-11-26 18:32:04.340728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.739 ms 00:28:11.155 [2024-11-26 18:32:04.340735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.156 [2024-11-26 
18:32:04.346107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.156 [2024-11-26 18:32:04.346140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:11.156 [2024-11-26 18:32:04.346148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.366 ms 00:28:11.156 [2024-11-26 18:32:04.346155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.156 [2024-11-26 18:32:04.381895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.156 [2024-11-26 18:32:04.381935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:11.156 [2024-11-26 18:32:04.381947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.720 ms 00:28:11.156 [2024-11-26 18:32:04.381956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.156 [2024-11-26 18:32:04.402871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.156 [2024-11-26 18:32:04.402909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:11.156 [2024-11-26 18:32:04.402926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.901 ms 00:28:11.156 [2024-11-26 18:32:04.402934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.156 [2024-11-26 18:32:04.403064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.156 [2024-11-26 18:32:04.403076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:11.156 [2024-11-26 18:32:04.403098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:28:11.156 [2024-11-26 18:32:04.403106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.156 [2024-11-26 18:32:04.439180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.156 [2024-11-26 18:32:04.439222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:11.156 [2024-11-26 18:32:04.439233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.126 ms 00:28:11.156 [2024-11-26 18:32:04.439241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.156 [2024-11-26 18:32:04.474338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.156 [2024-11-26 18:32:04.474375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:11.156 [2024-11-26 18:32:04.474385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.112 ms 00:28:11.156 [2024-11-26 18:32:04.474393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.418 [2024-11-26 18:32:04.510887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.418 [2024-11-26 18:32:04.510929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:11.418 [2024-11-26 18:32:04.510941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.515 ms 00:28:11.418 [2024-11-26 18:32:04.510948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.418 [2024-11-26 18:32:04.546593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.418 [2024-11-26 18:32:04.546642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:11.418 [2024-11-26 18:32:04.546653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.609 ms 00:28:11.418 [2024-11-26 18:32:04.546661] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.419 [2024-11-26 18:32:04.546710] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:11.419 [2024-11-26 18:32:04.546725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 
0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.546996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547305] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:11.419 [2024-11-26 18:32:04.547396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 
18:32:04.547527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:11.420 [2024-11-26 18:32:04.547548] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:11.420 [2024-11-26 18:32:04.547556] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 36169abc-b0a7-4ba5-a3f2-e4fa05e6ff6b 00:28:11.420 [2024-11-26 18:32:04.547564] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:11.420 [2024-11-26 18:32:04.547571] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:11.420 [2024-11-26 18:32:04.547578] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:11.420 [2024-11-26 18:32:04.547587] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:11.420 [2024-11-26 18:32:04.547594] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:11.420 [2024-11-26 18:32:04.547605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:11.420 [2024-11-26 18:32:04.547616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:11.420 [2024-11-26 18:32:04.547622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:11.420 [2024-11-26 18:32:04.547629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:11.420 [2024-11-26 18:32:04.547636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.420 [2024-11-26 18:32:04.547653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:11.420 [2024-11-26 18:32:04.547662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:28:11.420 [2024-11-26 18:32:04.547670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.420 [2024-11-26 18:32:04.568290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.420 [2024-11-26 18:32:04.568326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:11.420 [2024-11-26 18:32:04.568336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.639 ms 00:28:11.420 [2024-11-26 18:32:04.568344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.420 [2024-11-26 18:32:04.568925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.420 [2024-11-26 18:32:04.568941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:11.420 [2024-11-26 18:32:04.568949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:28:11.420 [2024-11-26 18:32:04.568957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.420 [2024-11-26 18:32:04.625031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.420 [2024-11-26 18:32:04.625074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:11.420 [2024-11-26 18:32:04.625085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.420 [2024-11-26 18:32:04.625098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.420 [2024-11-26 18:32:04.625206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.420 [2024-11-26 18:32:04.625217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:28:11.420 [2024-11-26 18:32:04.625225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.420 [2024-11-26 18:32:04.625233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.420 [2024-11-26 18:32:04.625286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.420 [2024-11-26 18:32:04.625298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:11.420 [2024-11-26 18:32:04.625305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.420 [2024-11-26 18:32:04.625313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.420 [2024-11-26 18:32:04.625334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.420 [2024-11-26 18:32:04.625342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:11.420 [2024-11-26 18:32:04.625349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.420 [2024-11-26 18:32:04.625357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.694 [2024-11-26 18:32:04.751553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.694 [2024-11-26 18:32:04.751620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:11.694 [2024-11-26 18:32:04.751649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.694 [2024-11-26 18:32:04.751658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.694 [2024-11-26 18:32:04.851292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.694 [2024-11-26 18:32:04.851351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:11.694 [2024-11-26 18:32:04.851363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.694 [2024-11-26 18:32:04.851387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.694 [2024-11-26 18:32:04.851456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.694 [2024-11-26 18:32:04.851465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:11.694 [2024-11-26 18:32:04.851473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.694 [2024-11-26 18:32:04.851481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.694 [2024-11-26 18:32:04.851507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.694 [2024-11-26 18:32:04.851521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:11.694 [2024-11-26 18:32:04.851529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.694 [2024-11-26 18:32:04.851536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.694 [2024-11-26 18:32:04.851662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.694 [2024-11-26 18:32:04.851692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:11.694 [2024-11-26 18:32:04.851702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.694 [2024-11-26 18:32:04.851709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.694 [2024-11-26 18:32:04.851748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.694 [2024-11-26 18:32:04.851758] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:11.694 [2024-11-26 18:32:04.851769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.694 [2024-11-26 18:32:04.851777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.695 [2024-11-26 18:32:04.851818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.695 [2024-11-26 18:32:04.851828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:11.695 [2024-11-26 18:32:04.851835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.695 [2024-11-26 18:32:04.851842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.695 [2024-11-26 18:32:04.851883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.695 [2024-11-26 18:32:04.851914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:11.695 [2024-11-26 18:32:04.851922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.695 [2024-11-26 18:32:04.851930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.695 [2024-11-26 18:32:04.852071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 519.802 ms, result 0 00:28:12.650 00:28:12.650 00:28:12.650 18:32:05 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:13.220 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:28:13.220 18:32:06 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:28:13.220 18:32:06 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:28:13.220 18:32:06 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:13.220 18:32:06 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:13.220 18:32:06 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:28:13.220 18:32:06 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:28:13.220 Process with pid 79607 is not found 00:28:13.220 18:32:06 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79607 00:28:13.220 18:32:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79607 ']' 00:28:13.220 18:32:06 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79607 00:28:13.220 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79607) - No such process 00:28:13.220 18:32:06 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79607 is not found' 00:28:13.220 00:28:13.220 real 1m11.296s 00:28:13.220 user 1m44.596s 00:28:13.220 sys 0m6.366s 00:28:13.220 18:32:06 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:13.220 18:32:06 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:28:13.220 ************************************ 00:28:13.220 END TEST ftl_trim 00:28:13.220 ************************************ 00:28:13.220 18:32:06 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:28:13.220 18:32:06 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:13.220 18:32:06 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:13.220 18:32:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:13.220 ************************************ 
00:28:13.220 START TEST ftl_restore 00:28:13.220 ************************************ 00:28:13.220 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:28:13.480 * Looking for test storage... 00:28:13.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:13.480 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:13.480 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:28:13.480 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:13.481 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:13.481 18:32:06 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:28:13.481 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:13.481 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:13.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.481 --rc genhtml_branch_coverage=1 00:28:13.481 --rc genhtml_function_coverage=1 00:28:13.481 --rc genhtml_legend=1 00:28:13.481 --rc geninfo_all_blocks=1 00:28:13.481 --rc geninfo_unexecuted_blocks=1 00:28:13.481 00:28:13.481 ' 00:28:13.481 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:13.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.481 --rc genhtml_branch_coverage=1 00:28:13.481 --rc genhtml_function_coverage=1 00:28:13.481 --rc genhtml_legend=1 00:28:13.481 --rc geninfo_all_blocks=1 00:28:13.481 --rc geninfo_unexecuted_blocks=1 00:28:13.481 00:28:13.481 ' 00:28:13.481 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:13.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.481 --rc genhtml_branch_coverage=1 00:28:13.481 --rc genhtml_function_coverage=1 00:28:13.481 --rc genhtml_legend=1 00:28:13.481 --rc geninfo_all_blocks=1 00:28:13.481 --rc geninfo_unexecuted_blocks=1 00:28:13.481 00:28:13.481 ' 00:28:13.481 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:13.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.481 --rc genhtml_branch_coverage=1 00:28:13.481 --rc genhtml_function_coverage=1 00:28:13.481 --rc genhtml_legend=1 00:28:13.481 --rc geninfo_all_blocks=1 00:28:13.481 --rc geninfo_unexecuted_blocks=1 00:28:13.481 00:28:13.481 ' 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
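The version gate traced above is what selects the extended lcov coverage flags: scripts/common.sh tokenizes both version strings on '.', '-' and ':' (the IFS=.-: assignment), then walks the components numerically until one side wins, so "lt 1.15 2" succeeds because the first components 1 and 2 already differ. A minimal stand-alone sketch of that comparison, assuming purely numeric version components; the helper name ver_lt and the final echo are illustrative, not part of scripts/common.sh:

#!/usr/bin/env bash
# ver_lt A B -> succeeds (exit 0) when version A sorts strictly before B.
# Same idea as the cmp_versions trace above: split on '.', '-' and ':',
# then compare components numerically, padding the shorter list with zeros.
ver_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal versions are not strictly less
}

ver_lt 1.15 2 && echo 'old lcov: enable branch/function coverage options'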
00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.64DvkW7l4c 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:13.481 
18:32:06 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79872 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:13.481 18:32:06 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79872 00:28:13.481 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79872 ']' 00:28:13.481 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.481 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.481 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.481 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.481 18:32:06 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:13.740 [2024-11-26 18:32:06.857548] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:28:13.740 [2024-11-26 18:32:06.857670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79872 ] 00:28:13.740 [2024-11-26 18:32:07.033100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.999 [2024-11-26 18:32:07.143266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.938 18:32:08 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.938 18:32:08 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:28:14.938 18:32:08 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:14.938 18:32:08 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:28:14.938 18:32:08 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:14.938 18:32:08 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:28:14.938 18:32:08 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:28:14.938 18:32:08 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:15.199 18:32:08 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:15.199 18:32:08 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:28:15.199 18:32:08 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:15.199 18:32:08 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:15.199 18:32:08 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:15.199 18:32:08 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:15.199 18:32:08 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:15.199 18:32:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:15.199 18:32:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:15.199 { 00:28:15.199 "name": "nvme0n1", 00:28:15.199 "aliases": [ 00:28:15.199 "ed6041c5-18f0-4fae-846f-9be58ed9f64b" 00:28:15.199 ], 00:28:15.199 "product_name": "NVMe disk", 00:28:15.199 "block_size": 4096, 00:28:15.199 "num_blocks": 1310720, 00:28:15.199 "uuid": 
"ed6041c5-18f0-4fae-846f-9be58ed9f64b", 00:28:15.199 "numa_id": -1, 00:28:15.199 "assigned_rate_limits": { 00:28:15.199 "rw_ios_per_sec": 0, 00:28:15.199 "rw_mbytes_per_sec": 0, 00:28:15.199 "r_mbytes_per_sec": 0, 00:28:15.199 "w_mbytes_per_sec": 0 00:28:15.199 }, 00:28:15.199 "claimed": true, 00:28:15.199 "claim_type": "read_many_write_one", 00:28:15.199 "zoned": false, 00:28:15.199 "supported_io_types": { 00:28:15.199 "read": true, 00:28:15.199 "write": true, 00:28:15.199 "unmap": true, 00:28:15.199 "flush": true, 00:28:15.199 "reset": true, 00:28:15.199 "nvme_admin": true, 00:28:15.199 "nvme_io": true, 00:28:15.199 "nvme_io_md": false, 00:28:15.199 "write_zeroes": true, 00:28:15.199 "zcopy": false, 00:28:15.199 "get_zone_info": false, 00:28:15.199 "zone_management": false, 00:28:15.199 "zone_append": false, 00:28:15.199 "compare": true, 00:28:15.199 "compare_and_write": false, 00:28:15.199 "abort": true, 00:28:15.199 "seek_hole": false, 00:28:15.199 "seek_data": false, 00:28:15.199 "copy": true, 00:28:15.199 "nvme_iov_md": false 00:28:15.199 }, 00:28:15.199 "driver_specific": { 00:28:15.199 "nvme": [ 00:28:15.199 { 00:28:15.199 "pci_address": "0000:00:11.0", 00:28:15.199 "trid": { 00:28:15.199 "trtype": "PCIe", 00:28:15.199 "traddr": "0000:00:11.0" 00:28:15.199 }, 00:28:15.199 "ctrlr_data": { 00:28:15.199 "cntlid": 0, 00:28:15.199 "vendor_id": "0x1b36", 00:28:15.199 "model_number": "QEMU NVMe Ctrl", 00:28:15.199 "serial_number": "12341", 00:28:15.199 "firmware_revision": "8.0.0", 00:28:15.199 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:15.199 "oacs": { 00:28:15.199 "security": 0, 00:28:15.199 "format": 1, 00:28:15.199 "firmware": 0, 00:28:15.199 "ns_manage": 1 00:28:15.199 }, 00:28:15.199 "multi_ctrlr": false, 00:28:15.199 "ana_reporting": false 00:28:15.199 }, 00:28:15.199 "vs": { 00:28:15.199 "nvme_version": "1.4" 00:28:15.199 }, 00:28:15.199 "ns_data": { 00:28:15.199 "id": 1, 00:28:15.199 "can_share": false 00:28:15.199 } 00:28:15.199 } 00:28:15.199 ], 00:28:15.199 "mp_policy": "active_passive" 00:28:15.199 } 00:28:15.199 } 00:28:15.199 ]' 00:28:15.199 18:32:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:15.199 18:32:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:15.199 18:32:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:15.199 18:32:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:15.199 18:32:08 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:15.199 18:32:08 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:28:15.199 18:32:08 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:28:15.199 18:32:08 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:15.199 18:32:08 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:28:15.459 18:32:08 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:15.459 18:32:08 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:15.459 18:32:08 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=786c9a70-1f13-44ae-bd8e-ab77ddfcc3a0 00:28:15.459 18:32:08 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:28:15.459 18:32:08 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 786c9a70-1f13-44ae-bd8e-ab77ddfcc3a0 00:28:15.719 18:32:08 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:28:15.979 18:32:09 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=b2752b9c-73b9-4919-b206-df22f8045165 00:28:15.979 18:32:09 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b2752b9c-73b9-4919-b206-df22f8045165 00:28:16.239 18:32:09 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=a34af880-91f2-4ad4-b6d8-22db652c1d7b 00:28:16.239 18:32:09 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:16.239 18:32:09 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a34af880-91f2-4ad4-b6d8-22db652c1d7b 00:28:16.239 18:32:09 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:28:16.239 18:32:09 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:16.239 18:32:09 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=a34af880-91f2-4ad4-b6d8-22db652c1d7b 00:28:16.239 18:32:09 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:28:16.239 18:32:09 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size a34af880-91f2-4ad4-b6d8-22db652c1d7b 00:28:16.239 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a34af880-91f2-4ad4-b6d8-22db652c1d7b 00:28:16.239 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:16.239 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:16.239 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:16.239 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a34af880-91f2-4ad4-b6d8-22db652c1d7b 00:28:16.239 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:16.239 { 00:28:16.239 "name": "a34af880-91f2-4ad4-b6d8-22db652c1d7b", 00:28:16.239 "aliases": [ 00:28:16.239 "lvs/nvme0n1p0" 00:28:16.239 ], 00:28:16.239 "product_name": "Logical Volume", 00:28:16.239 "block_size": 4096, 00:28:16.239 "num_blocks": 26476544, 00:28:16.239 "uuid": "a34af880-91f2-4ad4-b6d8-22db652c1d7b", 00:28:16.239 "assigned_rate_limits": { 00:28:16.239 "rw_ios_per_sec": 0, 00:28:16.239 "rw_mbytes_per_sec": 0, 00:28:16.239 "r_mbytes_per_sec": 0, 00:28:16.239 "w_mbytes_per_sec": 0 00:28:16.239 }, 00:28:16.239 "claimed": false, 00:28:16.239 "zoned": false, 00:28:16.239 "supported_io_types": { 00:28:16.239 "read": true, 00:28:16.239 "write": true, 00:28:16.239 "unmap": true, 00:28:16.239 "flush": false, 00:28:16.239 "reset": true, 00:28:16.239 "nvme_admin": false, 00:28:16.239 "nvme_io": false, 00:28:16.239 "nvme_io_md": false, 00:28:16.239 "write_zeroes": true, 00:28:16.239 "zcopy": false, 00:28:16.239 "get_zone_info": false, 00:28:16.239 "zone_management": false, 00:28:16.239 "zone_append": false, 00:28:16.239 "compare": false, 00:28:16.239 "compare_and_write": false, 00:28:16.239 "abort": false, 00:28:16.239 "seek_hole": true, 00:28:16.239 "seek_data": true, 00:28:16.239 "copy": false, 00:28:16.239 "nvme_iov_md": false 00:28:16.239 }, 00:28:16.239 "driver_specific": { 00:28:16.239 "lvol": { 00:28:16.239 "lvol_store_uuid": "b2752b9c-73b9-4919-b206-df22f8045165", 00:28:16.239 "base_bdev": "nvme0n1", 00:28:16.239 "thin_provision": true, 00:28:16.239 "num_allocated_clusters": 0, 00:28:16.239 "snapshot": false, 00:28:16.239 "clone": false, 00:28:16.239 "esnap_clone": false 00:28:16.239 } 00:28:16.239 } 00:28:16.239 } 00:28:16.239 ]' 00:28:16.239 18:32:09 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:16.239 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:16.239 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:16.500 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:16.500 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:16.500 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:16.500 18:32:09 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:28:16.500 18:32:09 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:28:16.500 18:32:09 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:16.760 18:32:09 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:16.760 18:32:09 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:16.760 18:32:09 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size a34af880-91f2-4ad4-b6d8-22db652c1d7b 00:28:16.760 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a34af880-91f2-4ad4-b6d8-22db652c1d7b 00:28:16.760 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:16.760 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:16.760 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:16.760 18:32:09 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a34af880-91f2-4ad4-b6d8-22db652c1d7b 00:28:16.760 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:16.760 { 00:28:16.760 "name": "a34af880-91f2-4ad4-b6d8-22db652c1d7b", 00:28:16.760 "aliases": [ 00:28:16.760 "lvs/nvme0n1p0" 00:28:16.760 ], 00:28:16.760 "product_name": "Logical Volume", 00:28:16.760 "block_size": 4096, 00:28:16.760 "num_blocks": 26476544, 00:28:16.760 "uuid": "a34af880-91f2-4ad4-b6d8-22db652c1d7b", 00:28:16.760 "assigned_rate_limits": { 00:28:16.760 "rw_ios_per_sec": 0, 00:28:16.760 "rw_mbytes_per_sec": 0, 00:28:16.760 "r_mbytes_per_sec": 0, 00:28:16.760 "w_mbytes_per_sec": 0 00:28:16.760 }, 00:28:16.760 "claimed": false, 00:28:16.760 "zoned": false, 00:28:16.760 "supported_io_types": { 00:28:16.760 "read": true, 00:28:16.760 "write": true, 00:28:16.760 "unmap": true, 00:28:16.760 "flush": false, 00:28:16.760 "reset": true, 00:28:16.760 "nvme_admin": false, 00:28:16.760 "nvme_io": false, 00:28:16.760 "nvme_io_md": false, 00:28:16.760 "write_zeroes": true, 00:28:16.760 "zcopy": false, 00:28:16.760 "get_zone_info": false, 00:28:16.760 "zone_management": false, 00:28:16.760 "zone_append": false, 00:28:16.760 "compare": false, 00:28:16.760 "compare_and_write": false, 00:28:16.760 "abort": false, 00:28:16.760 "seek_hole": true, 00:28:16.760 "seek_data": true, 00:28:16.760 "copy": false, 00:28:16.760 "nvme_iov_md": false 00:28:16.760 }, 00:28:16.760 "driver_specific": { 00:28:16.760 "lvol": { 00:28:16.760 "lvol_store_uuid": "b2752b9c-73b9-4919-b206-df22f8045165", 00:28:16.760 "base_bdev": "nvme0n1", 00:28:16.760 "thin_provision": true, 00:28:16.760 "num_allocated_clusters": 0, 00:28:16.760 "snapshot": false, 00:28:16.760 "clone": false, 00:28:16.760 "esnap_clone": false 00:28:16.760 } 00:28:16.760 } 00:28:16.760 } 00:28:16.760 ]' 00:28:16.760 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
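The bdev sizing that repeats through this test (the 5120 computed earlier for the raw 1310720-block namespace, and 103424 for the 26476544-block lvol here) is plain arithmetic over the bdev_get_bdevs JSON: block_size times num_blocks, scaled down to MiB (4096 * 1310720 / 1024 / 1024 = 5120). A rough equivalent of the get_bdev_size helper being traced, assuming a running SPDK target and rpc.py reachable on PATH; the wrapper name bdev_size_mb is illustrative:

#!/usr/bin/env bash
# bdev_size_mb NAME -> size of bdev NAME in MiB, from bdev_get_bdevs JSON.
bdev_size_mb() {
    local json bs nb
    json=$(rpc.py bdev_get_bdevs -b "$1")   # one-element JSON array
    bs=$(jq '.[] .block_size' <<< "$json")  # e.g. 4096
    nb=$(jq '.[] .num_blocks' <<< "$json")  # e.g. 1310720
    echo $(( bs * nb / 1024 / 1024 ))       # bytes -> MiB
}

bdev_size_mb nvme0n1   # for the namespace above this prints 5120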
00:28:17.020 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:17.020 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:17.020 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:17.020 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:17.020 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:17.020 18:32:10 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:28:17.020 18:32:10 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:17.020 18:32:10 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:17.020 18:32:10 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size a34af880-91f2-4ad4-b6d8-22db652c1d7b 00:28:17.020 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a34af880-91f2-4ad4-b6d8-22db652c1d7b 00:28:17.020 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:17.020 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:28:17.020 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:28:17.281 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a34af880-91f2-4ad4-b6d8-22db652c1d7b 00:28:17.281 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:17.281 { 00:28:17.281 "name": "a34af880-91f2-4ad4-b6d8-22db652c1d7b", 00:28:17.281 "aliases": [ 00:28:17.281 "lvs/nvme0n1p0" 00:28:17.281 ], 00:28:17.281 "product_name": "Logical Volume", 00:28:17.281 "block_size": 4096, 00:28:17.281 "num_blocks": 26476544, 00:28:17.281 "uuid": "a34af880-91f2-4ad4-b6d8-22db652c1d7b", 00:28:17.281 "assigned_rate_limits": { 00:28:17.281 "rw_ios_per_sec": 0, 00:28:17.281 "rw_mbytes_per_sec": 0, 00:28:17.281 "r_mbytes_per_sec": 0, 00:28:17.281 "w_mbytes_per_sec": 0 00:28:17.281 }, 00:28:17.281 "claimed": false, 00:28:17.281 "zoned": false, 00:28:17.281 "supported_io_types": { 00:28:17.281 "read": true, 00:28:17.281 "write": true, 00:28:17.281 "unmap": true, 00:28:17.281 "flush": false, 00:28:17.281 "reset": true, 00:28:17.281 "nvme_admin": false, 00:28:17.281 "nvme_io": false, 00:28:17.281 "nvme_io_md": false, 00:28:17.281 "write_zeroes": true, 00:28:17.281 "zcopy": false, 00:28:17.281 "get_zone_info": false, 00:28:17.281 "zone_management": false, 00:28:17.281 "zone_append": false, 00:28:17.281 "compare": false, 00:28:17.281 "compare_and_write": false, 00:28:17.281 "abort": false, 00:28:17.281 "seek_hole": true, 00:28:17.281 "seek_data": true, 00:28:17.281 "copy": false, 00:28:17.281 "nvme_iov_md": false 00:28:17.281 }, 00:28:17.281 "driver_specific": { 00:28:17.281 "lvol": { 00:28:17.281 "lvol_store_uuid": "b2752b9c-73b9-4919-b206-df22f8045165", 00:28:17.281 "base_bdev": "nvme0n1", 00:28:17.281 "thin_provision": true, 00:28:17.281 "num_allocated_clusters": 0, 00:28:17.281 "snapshot": false, 00:28:17.281 "clone": false, 00:28:17.281 "esnap_clone": false 00:28:17.281 } 00:28:17.281 } 00:28:17.281 } 00:28:17.281 ]' 00:28:17.281 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:17.281 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:28:17.281 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:17.543 18:32:10 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:28:17.543 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:17.543 18:32:10 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:28:17.543 18:32:10 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:17.543 18:32:10 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a34af880-91f2-4ad4-b6d8-22db652c1d7b --l2p_dram_limit 10' 00:28:17.543 18:32:10 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:17.543 18:32:10 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:17.543 18:32:10 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:17.543 18:32:10 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:28:17.543 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:28:17.543 18:32:10 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a34af880-91f2-4ad4-b6d8-22db652c1d7b --l2p_dram_limit 10 -c nvc0n1p0 00:28:17.543 [2024-11-26 18:32:10.815725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.543 [2024-11-26 18:32:10.815794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:17.543 [2024-11-26 18:32:10.815812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:17.543 [2024-11-26 18:32:10.815820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.543 [2024-11-26 18:32:10.815889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.543 [2024-11-26 18:32:10.815900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:17.543 [2024-11-26 18:32:10.815910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:17.543 [2024-11-26 18:32:10.815917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.543 [2024-11-26 18:32:10.815938] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:17.543 [2024-11-26 18:32:10.816931] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:17.543 [2024-11-26 18:32:10.816974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.543 [2024-11-26 18:32:10.816983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:17.543 [2024-11-26 18:32:10.816994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:28:17.543 [2024-11-26 18:32:10.817001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.543 [2024-11-26 18:32:10.817094] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8ed23b79-c687-4115-982f-0da7fc18820e 00:28:17.543 [2024-11-26 18:32:10.818526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.543 [2024-11-26 18:32:10.818557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:17.543 [2024-11-26 18:32:10.818583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:17.543 [2024-11-26 18:32:10.818592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.543 [2024-11-26 18:32:10.826055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.543 [2024-11-26 
18:32:10.826106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:17.543 [2024-11-26 18:32:10.826115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.428 ms 00:28:17.543 [2024-11-26 18:32:10.826124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.543 [2024-11-26 18:32:10.826230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.543 [2024-11-26 18:32:10.826246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:17.543 [2024-11-26 18:32:10.826254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:17.543 [2024-11-26 18:32:10.826266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.543 [2024-11-26 18:32:10.826339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.543 [2024-11-26 18:32:10.826351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:17.543 [2024-11-26 18:32:10.826361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:17.543 [2024-11-26 18:32:10.826369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.543 [2024-11-26 18:32:10.826391] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:17.543 [2024-11-26 18:32:10.831241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.543 [2024-11-26 18:32:10.831271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:17.543 [2024-11-26 18:32:10.831299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.864 ms 00:28:17.544 [2024-11-26 18:32:10.831306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.544 [2024-11-26 18:32:10.831336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.544 [2024-11-26 18:32:10.831344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:17.544 [2024-11-26 18:32:10.831353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:17.544 [2024-11-26 18:32:10.831360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.544 [2024-11-26 18:32:10.831398] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:17.544 [2024-11-26 18:32:10.831515] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:17.544 [2024-11-26 18:32:10.831546] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:17.544 [2024-11-26 18:32:10.831557] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:17.544 [2024-11-26 18:32:10.831568] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:17.544 [2024-11-26 18:32:10.831577] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:17.544 [2024-11-26 18:32:10.831586] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:17.544 [2024-11-26 18:32:10.831595] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:17.544 [2024-11-26 18:32:10.831604] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:17.544 [2024-11-26 18:32:10.831611] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:17.544 [2024-11-26 18:32:10.831620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.544 [2024-11-26 18:32:10.831639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:17.544 [2024-11-26 18:32:10.831659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:28:17.544 [2024-11-26 18:32:10.831667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.544 [2024-11-26 18:32:10.831760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.544 [2024-11-26 18:32:10.831778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:17.544 [2024-11-26 18:32:10.831789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:28:17.544 [2024-11-26 18:32:10.831796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.544 [2024-11-26 18:32:10.831894] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:17.544 [2024-11-26 18:32:10.831913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:17.544 [2024-11-26 18:32:10.831923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:17.544 [2024-11-26 18:32:10.831931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:17.544 [2024-11-26 18:32:10.831940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:17.544 [2024-11-26 18:32:10.831947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:17.544 [2024-11-26 18:32:10.831957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:17.544 [2024-11-26 18:32:10.831964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:17.544 [2024-11-26 18:32:10.831973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:17.544 [2024-11-26 18:32:10.831980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:17.544 [2024-11-26 18:32:10.831988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:17.544 [2024-11-26 18:32:10.831995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:17.544 [2024-11-26 18:32:10.832003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:17.544 [2024-11-26 18:32:10.832010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:17.544 [2024-11-26 18:32:10.832019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:17.544 [2024-11-26 18:32:10.832025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:17.544 [2024-11-26 18:32:10.832036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:17.544 [2024-11-26 18:32:10.832043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:17.544 [2024-11-26 18:32:10.832051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:17.544 [2024-11-26 18:32:10.832058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:17.544 [2024-11-26 18:32:10.832066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:17.544 [2024-11-26 18:32:10.832072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:17.544 [2024-11-26 18:32:10.832080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:17.544 
[2024-11-26 18:32:10.832087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:17.544 [2024-11-26 18:32:10.832095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:17.544 [2024-11-26 18:32:10.832101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:17.544 [2024-11-26 18:32:10.832109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:17.544 [2024-11-26 18:32:10.832115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:17.544 [2024-11-26 18:32:10.832123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:17.544 [2024-11-26 18:32:10.832130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:17.544 [2024-11-26 18:32:10.832138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:17.544 [2024-11-26 18:32:10.832144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:17.544 [2024-11-26 18:32:10.832153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:17.544 [2024-11-26 18:32:10.832160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:17.544 [2024-11-26 18:32:10.832168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:17.544 [2024-11-26 18:32:10.832174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:17.544 [2024-11-26 18:32:10.832183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:17.544 [2024-11-26 18:32:10.832189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:17.544 [2024-11-26 18:32:10.832198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:17.544 [2024-11-26 18:32:10.832205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:17.544 [2024-11-26 18:32:10.832213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:17.544 [2024-11-26 18:32:10.832220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:17.544 [2024-11-26 18:32:10.832228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:17.544 [2024-11-26 18:32:10.832235] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:17.544 [2024-11-26 18:32:10.832245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:17.544 [2024-11-26 18:32:10.832253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:17.544 [2024-11-26 18:32:10.832262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:17.544 [2024-11-26 18:32:10.832270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:17.544 [2024-11-26 18:32:10.832281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:17.544 [2024-11-26 18:32:10.832287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:17.544 [2024-11-26 18:32:10.832296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:17.544 [2024-11-26 18:32:10.832302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:17.544 [2024-11-26 18:32:10.832311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:17.544 [2024-11-26 18:32:10.832321] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:17.544 [2024-11-26 
18:32:10.832334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:17.544 [2024-11-26 18:32:10.832343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:17.544 [2024-11-26 18:32:10.832352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:17.544 [2024-11-26 18:32:10.832359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:17.544 [2024-11-26 18:32:10.832368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:17.544 [2024-11-26 18:32:10.832375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:17.544 [2024-11-26 18:32:10.832385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:17.544 [2024-11-26 18:32:10.832392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:17.544 [2024-11-26 18:32:10.832401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:17.544 [2024-11-26 18:32:10.832409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:17.544 [2024-11-26 18:32:10.832420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:17.544 [2024-11-26 18:32:10.832428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:17.544 [2024-11-26 18:32:10.832438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:17.544 [2024-11-26 18:32:10.832445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:17.544 [2024-11-26 18:32:10.832454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:17.544 [2024-11-26 18:32:10.832461] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:17.544 [2024-11-26 18:32:10.832471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:17.544 [2024-11-26 18:32:10.832480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:17.544 [2024-11-26 18:32:10.832488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:17.544 [2024-11-26 18:32:10.832496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:17.544 [2024-11-26 18:32:10.832505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:17.545 [2024-11-26 18:32:10.832512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:17.545 [2024-11-26 18:32:10.832522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:17.545 [2024-11-26 18:32:10.832530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:28:17.545 [2024-11-26 18:32:10.832538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:17.545 [2024-11-26 18:32:10.832578] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:17.545 [2024-11-26 18:32:10.832592] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:21.740 [2024-11-26 18:32:14.535012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.535078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:21.740 [2024-11-26 18:32:14.535092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3709.573 ms 00:28:21.740 [2024-11-26 18:32:14.535102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.570967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.571028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:21.740 [2024-11-26 18:32:14.571042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.619 ms 00:28:21.740 [2024-11-26 18:32:14.571053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.571216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.571230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:21.740 [2024-11-26 18:32:14.571238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:21.740 [2024-11-26 18:32:14.571252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.615967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.616020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:21.740 [2024-11-26 18:32:14.616032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.766 ms 00:28:21.740 [2024-11-26 18:32:14.616042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.616087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.616098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:21.740 [2024-11-26 18:32:14.616106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:21.740 [2024-11-26 18:32:14.616127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.616586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.616609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:21.740 [2024-11-26 18:32:14.616627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:28:21.740 [2024-11-26 18:32:14.616637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 
[2024-11-26 18:32:14.616727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.616746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:21.740 [2024-11-26 18:32:14.616754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:21.740 [2024-11-26 18:32:14.616765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.635169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.635238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:21.740 [2024-11-26 18:32:14.635251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.415 ms 00:28:21.740 [2024-11-26 18:32:14.635260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.659437] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:21.740 [2024-11-26 18:32:14.662812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.662848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:21.740 [2024-11-26 18:32:14.662863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.495 ms 00:28:21.740 [2024-11-26 18:32:14.662873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.753561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.753631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:21.740 [2024-11-26 18:32:14.753663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.818 ms 00:28:21.740 [2024-11-26 18:32:14.753671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.753857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.753868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:21.740 [2024-11-26 18:32:14.753881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:28:21.740 [2024-11-26 18:32:14.753888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.789302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.789344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:21.740 [2024-11-26 18:32:14.789358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.436 ms 00:28:21.740 [2024-11-26 18:32:14.789366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.823858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.823896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:21.740 [2024-11-26 18:32:14.823925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.498 ms 00:28:21.740 [2024-11-26 18:32:14.823933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.824560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.824594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:21.740 
[2024-11-26 18:32:14.824608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:28:21.740 [2024-11-26 18:32:14.824622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.926258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.926315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:21.740 [2024-11-26 18:32:14.926333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.781 ms 00:28:21.740 [2024-11-26 18:32:14.926356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.960793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.960837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:21.740 [2024-11-26 18:32:14.960869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.424 ms 00:28:21.740 [2024-11-26 18:32:14.960877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:14.994619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.740 [2024-11-26 18:32:14.994663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:21.740 [2024-11-26 18:32:14.994677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.758 ms 00:28:21.740 [2024-11-26 18:32:14.994684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.740 [2024-11-26 18:32:15.029410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.741 [2024-11-26 18:32:15.029449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:21.741 [2024-11-26 18:32:15.029478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.735 ms 00:28:21.741 [2024-11-26 18:32:15.029485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.741 [2024-11-26 18:32:15.029527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.741 [2024-11-26 18:32:15.029537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:21.741 [2024-11-26 18:32:15.029559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:21.741 [2024-11-26 18:32:15.029566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.741 [2024-11-26 18:32:15.029670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.741 [2024-11-26 18:32:15.029683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:21.741 [2024-11-26 18:32:15.029693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:21.741 [2024-11-26 18:32:15.029700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.741 [2024-11-26 18:32:15.030745] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4222.674 ms, result 0 00:28:21.741 { 00:28:21.741 "name": "ftl0", 00:28:21.741 "uuid": "8ed23b79-c687-4115-982f-0da7fc18820e" 00:28:21.741 } 00:28:21.741 18:32:15 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:21.741 18:32:15 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:22.001 18:32:15 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:28:22.001 18:32:15 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:22.261 [2024-11-26 18:32:15.445309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.261 [2024-11-26 18:32:15.445367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:22.261 [2024-11-26 18:32:15.445381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:22.261 [2024-11-26 18:32:15.445391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.261 [2024-11-26 18:32:15.445415] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:22.261 [2024-11-26 18:32:15.449588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.261 [2024-11-26 18:32:15.449625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:22.261 [2024-11-26 18:32:15.449637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.163 ms 00:28:22.261 [2024-11-26 18:32:15.449645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.261 [2024-11-26 18:32:15.449901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.261 [2024-11-26 18:32:15.449920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:22.261 [2024-11-26 18:32:15.449931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:28:22.261 [2024-11-26 18:32:15.449938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.261 [2024-11-26 18:32:15.452402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.261 [2024-11-26 18:32:15.452443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:22.261 [2024-11-26 18:32:15.452457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.452 ms 00:28:22.261 [2024-11-26 18:32:15.452465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.261 [2024-11-26 18:32:15.457376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.261 [2024-11-26 18:32:15.457411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:22.261 [2024-11-26 18:32:15.457422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.894 ms 00:28:22.261 [2024-11-26 18:32:15.457428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.261 [2024-11-26 18:32:15.492419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.261 [2024-11-26 18:32:15.492461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:22.261 [2024-11-26 18:32:15.492475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.992 ms 00:28:22.261 [2024-11-26 18:32:15.492482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.261 [2024-11-26 18:32:15.512542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.261 [2024-11-26 18:32:15.512578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:22.261 [2024-11-26 18:32:15.512607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.039 ms 00:28:22.261 [2024-11-26 18:32:15.512615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.261 [2024-11-26 18:32:15.512761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.261 [2024-11-26 18:32:15.512779] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:22.261 [2024-11-26 18:32:15.512790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:28:22.261 [2024-11-26 18:32:15.512797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.261 [2024-11-26 18:32:15.547453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.261 [2024-11-26 18:32:15.547488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:22.261 [2024-11-26 18:32:15.547516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.702 ms 00:28:22.261 [2024-11-26 18:32:15.547523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.261 [2024-11-26 18:32:15.581671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.262 [2024-11-26 18:32:15.581707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:22.262 [2024-11-26 18:32:15.581736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.173 ms 00:28:22.262 [2024-11-26 18:32:15.581743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.522 [2024-11-26 18:32:15.616170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.522 [2024-11-26 18:32:15.616206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:22.522 [2024-11-26 18:32:15.616218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.451 ms 00:28:22.522 [2024-11-26 18:32:15.616224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.522 [2024-11-26 18:32:15.650775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.522 [2024-11-26 18:32:15.650811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:22.522 [2024-11-26 18:32:15.650824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.516 ms 00:28:22.522 [2024-11-26 18:32:15.650830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.522 [2024-11-26 18:32:15.650883] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:22.522 [2024-11-26 18:32:15.650897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.650911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.650918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.650927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.650934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.650943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.650950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.650960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.650967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.650977] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.650984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.650993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 
[2024-11-26 18:32:15.651177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:28:22.522 [2024-11-26 18:32:15.651399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:22.522 [2024-11-26 18:32:15.651502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:22.523 [2024-11-26 18:32:15.651739] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:22.523 [2024-11-26 18:32:15.651748] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ed23b79-c687-4115-982f-0da7fc18820e 00:28:22.523 [2024-11-26 18:32:15.651755] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:22.523 [2024-11-26 18:32:15.651766] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:22.523 [2024-11-26 18:32:15.651774] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:22.523 [2024-11-26 18:32:15.651784] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:22.523 [2024-11-26 18:32:15.651791] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:22.523 [2024-11-26 18:32:15.651799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:22.523 [2024-11-26 18:32:15.651806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:22.523 [2024-11-26 18:32:15.651813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:22.523 [2024-11-26 18:32:15.651820] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:28:22.523 [2024-11-26 18:32:15.651829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.523 [2024-11-26 18:32:15.651836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:22.523 [2024-11-26 18:32:15.651845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:28:22.523 [2024-11-26 18:32:15.651854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.523 [2024-11-26 18:32:15.671300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.523 [2024-11-26 18:32:15.671334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:22.523 [2024-11-26 18:32:15.671346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.437 ms 00:28:22.523 [2024-11-26 18:32:15.671353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.523 [2024-11-26 18:32:15.671846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.523 [2024-11-26 18:32:15.671862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:22.523 [2024-11-26 18:32:15.671874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:28:22.523 [2024-11-26 18:32:15.671881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.523 [2024-11-26 18:32:15.734300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.523 [2024-11-26 18:32:15.734342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:22.523 [2024-11-26 18:32:15.734372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.523 [2024-11-26 18:32:15.734380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.523 [2024-11-26 18:32:15.734441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.523 [2024-11-26 18:32:15.734449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:22.523 [2024-11-26 18:32:15.734461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.523 [2024-11-26 18:32:15.734468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.523 [2024-11-26 18:32:15.734549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.523 [2024-11-26 18:32:15.734560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:22.523 [2024-11-26 18:32:15.734570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.523 [2024-11-26 18:32:15.734577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.523 [2024-11-26 18:32:15.734600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.523 [2024-11-26 18:32:15.734609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:22.523 [2024-11-26 18:32:15.734618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.523 [2024-11-26 18:32:15.734626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.523 [2024-11-26 18:32:15.853676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.523 [2024-11-26 18:32:15.853756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:22.523 [2024-11-26 18:32:15.853771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
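The statistics dump a few entries up deserves a gloss: the numbers are consistent with FTL computing write amplification as total media writes divided by user writes, so with 960 internal metadata writes and 0 user writes

    WAF = total writes / user writes = 960 / 0 -> inf

which is the expected readout for a device that was created and unloaded without any user I/O. The Rollback entries around this point appear to be the shutdown-path counterparts of the startup Actions logged earlier; their uniform 0.000 ms durations suggest each teardown step reduced to trivial bookkeeping in this run.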
00:28:22.523 [2024-11-26 18:32:15.853779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.783 [2024-11-26 18:32:15.951758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.783 [2024-11-26 18:32:15.951821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:22.783 [2024-11-26 18:32:15.951838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.783 [2024-11-26 18:32:15.951845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.783 [2024-11-26 18:32:15.951959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.783 [2024-11-26 18:32:15.951970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:22.783 [2024-11-26 18:32:15.951979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.783 [2024-11-26 18:32:15.951986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.783 [2024-11-26 18:32:15.952031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.783 [2024-11-26 18:32:15.952040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:22.783 [2024-11-26 18:32:15.952050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.783 [2024-11-26 18:32:15.952057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.783 [2024-11-26 18:32:15.952170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.783 [2024-11-26 18:32:15.952202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:22.783 [2024-11-26 18:32:15.952212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.783 [2024-11-26 18:32:15.952219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.783 [2024-11-26 18:32:15.952257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.783 [2024-11-26 18:32:15.952267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:22.783 [2024-11-26 18:32:15.952276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.783 [2024-11-26 18:32:15.952284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.783 [2024-11-26 18:32:15.952327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.783 [2024-11-26 18:32:15.952340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:22.784 [2024-11-26 18:32:15.952349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.784 [2024-11-26 18:32:15.952356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.784 [2024-11-26 18:32:15.952401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.784 [2024-11-26 18:32:15.952413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:22.784 [2024-11-26 18:32:15.952423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.784 [2024-11-26 18:32:15.952430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.784 [2024-11-26 18:32:15.952556] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 508.194 ms, result 0 00:28:22.784 true 00:28:22.784 18:32:15 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79872 
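The killprocess call above expands, in the trace that follows, into an argument check, a liveness probe, a process-name lookup, and finally kill/wait. A sketch reconstructed purely from those traced commands (the real helper lives in common/autotest_common.sh and handles more cases, e.g. the sudo comparison visible below, which is omitted here):

    # Reconstructed from the traced commands only; not the verbatim helper.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1           # the '[' -z 79872 ']' check in the trace
        kill -0 "$pid" || return 1          # bail out if the PID is already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
            # the real helper special-cases process_name = sudo; omitted here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                         # reap it so the caller sees the exit status
    }

The kill -0 idiom is worth noting: signal 0 performs the existence and permission checks without delivering any signal, so the helper can return cleanly when the target process has already exited.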
00:28:22.784 18:32:15 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79872 ']' 00:28:22.784 18:32:15 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79872 00:28:22.784 18:32:15 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:28:22.784 18:32:15 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.784 18:32:15 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79872 00:28:22.784 18:32:16 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:22.784 18:32:16 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:22.784 killing process with pid 79872 00:28:22.784 18:32:16 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79872' 00:28:22.784 18:32:16 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79872 00:28:22.784 18:32:16 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79872 00:28:29.369 18:32:22 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:33.564 262144+0 records in 00:28:33.564 262144+0 records out 00:28:33.564 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.59355 s, 299 MB/s 00:28:33.564 18:32:26 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:34.500 18:32:27 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:34.500 [2024-11-26 18:32:27.808500] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:28:34.500 [2024-11-26 18:32:27.808608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80142 ] 00:28:34.759 [2024-11-26 18:32:27.982894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.759 [2024-11-26 18:32:28.090416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.327 [2024-11-26 18:32:28.440976] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:35.328 [2024-11-26 18:32:28.441059] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:35.328 [2024-11-26 18:32:28.600706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.328 [2024-11-26 18:32:28.600783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:35.328 [2024-11-26 18:32:28.600797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:35.328 [2024-11-26 18:32:28.600820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.328 [2024-11-26 18:32:28.600887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.328 [2024-11-26 18:32:28.600901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:35.328 [2024-11-26 18:32:28.600909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:28:35.328 [2024-11-26 18:32:28.600917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.328 [2024-11-26 18:32:28.600936] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:28:35.328 [2024-11-26 18:32:28.601900] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:35.328 [2024-11-26 18:32:28.601928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.328 [2024-11-26 18:32:28.601936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:35.328 [2024-11-26 18:32:28.601944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:28:35.328 [2024-11-26 18:32:28.601951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.328 [2024-11-26 18:32:28.603426] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:35.328 [2024-11-26 18:32:28.623132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.328 [2024-11-26 18:32:28.623223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:35.328 [2024-11-26 18:32:28.623238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.743 ms 00:28:35.328 [2024-11-26 18:32:28.623246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.328 [2024-11-26 18:32:28.623396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.328 [2024-11-26 18:32:28.623408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:35.328 [2024-11-26 18:32:28.623416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:28:35.328 [2024-11-26 18:32:28.623422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.328 [2024-11-26 18:32:28.630956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.328 [2024-11-26 18:32:28.630991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:35.328 [2024-11-26 18:32:28.631018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.442 ms 00:28:35.328 [2024-11-26 18:32:28.631035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.328 [2024-11-26 18:32:28.631132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.328 [2024-11-26 18:32:28.631148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:35.328 [2024-11-26 18:32:28.631157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:28:35.328 [2024-11-26 18:32:28.631163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.328 [2024-11-26 18:32:28.631218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.328 [2024-11-26 18:32:28.631226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:35.328 [2024-11-26 18:32:28.631234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:35.328 [2024-11-26 18:32:28.631240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.328 [2024-11-26 18:32:28.631269] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:35.328 [2024-11-26 18:32:28.635905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.328 [2024-11-26 18:32:28.635942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:35.328 [2024-11-26 18:32:28.635958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.652 ms 00:28:35.328 [2024-11-26 18:32:28.635965] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.328 [2024-11-26 18:32:28.636000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.328 [2024-11-26 18:32:28.636007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:35.328 [2024-11-26 18:32:28.636015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:35.328 [2024-11-26 18:32:28.636021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.328 [2024-11-26 18:32:28.636088] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:35.328 [2024-11-26 18:32:28.636113] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:35.328 [2024-11-26 18:32:28.636145] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:35.328 [2024-11-26 18:32:28.636164] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:35.328 [2024-11-26 18:32:28.636248] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:35.328 [2024-11-26 18:32:28.636262] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:35.328 [2024-11-26 18:32:28.636272] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:35.328 [2024-11-26 18:32:28.636281] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:35.328 [2024-11-26 18:32:28.636288] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:35.328 [2024-11-26 18:32:28.636315] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:35.328 [2024-11-26 18:32:28.636322] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:35.328 [2024-11-26 18:32:28.636335] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:35.328 [2024-11-26 18:32:28.636342] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:35.328 [2024-11-26 18:32:28.636349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.328 [2024-11-26 18:32:28.636357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:35.328 [2024-11-26 18:32:28.636365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:28:35.328 [2024-11-26 18:32:28.636372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.328 [2024-11-26 18:32:28.636444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.328 [2024-11-26 18:32:28.636465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:35.328 [2024-11-26 18:32:28.636472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:35.328 [2024-11-26 18:32:28.636480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.328 [2024-11-26 18:32:28.636577] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:35.328 [2024-11-26 18:32:28.636600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:35.328 [2024-11-26 18:32:28.636607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:28:35.328 [2024-11-26 18:32:28.636628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.328 [2024-11-26 18:32:28.636636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:35.328 [2024-11-26 18:32:28.636643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:35.328 [2024-11-26 18:32:28.636649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:35.328 [2024-11-26 18:32:28.636656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:35.328 [2024-11-26 18:32:28.636663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:35.328 [2024-11-26 18:32:28.636669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:35.328 [2024-11-26 18:32:28.636675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:35.328 [2024-11-26 18:32:28.636681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:35.328 [2024-11-26 18:32:28.636689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:35.328 [2024-11-26 18:32:28.636707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:35.328 [2024-11-26 18:32:28.636714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:35.328 [2024-11-26 18:32:28.636721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.328 [2024-11-26 18:32:28.636727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:35.328 [2024-11-26 18:32:28.636734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:35.328 [2024-11-26 18:32:28.636740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.328 [2024-11-26 18:32:28.636746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:35.328 [2024-11-26 18:32:28.636753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:35.328 [2024-11-26 18:32:28.636760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.328 [2024-11-26 18:32:28.636766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:35.328 [2024-11-26 18:32:28.636772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:35.328 [2024-11-26 18:32:28.636784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.328 [2024-11-26 18:32:28.636791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:35.328 [2024-11-26 18:32:28.636797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:35.328 [2024-11-26 18:32:28.636803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.328 [2024-11-26 18:32:28.636809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:35.328 [2024-11-26 18:32:28.636816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:35.328 [2024-11-26 18:32:28.636821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.328 [2024-11-26 18:32:28.636827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:35.328 [2024-11-26 18:32:28.636834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:35.328 [2024-11-26 18:32:28.636840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:35.328 [2024-11-26 18:32:28.636846] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:28:35.328 [2024-11-26 18:32:28.636852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:35.328 [2024-11-26 18:32:28.636859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:35.329 [2024-11-26 18:32:28.636865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:35.329 [2024-11-26 18:32:28.636871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:35.329 [2024-11-26 18:32:28.636878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.329 [2024-11-26 18:32:28.636884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:35.329 [2024-11-26 18:32:28.636890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:35.329 [2024-11-26 18:32:28.636896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.329 [2024-11-26 18:32:28.636902] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:35.329 [2024-11-26 18:32:28.636909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:35.329 [2024-11-26 18:32:28.636915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:35.329 [2024-11-26 18:32:28.636921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.329 [2024-11-26 18:32:28.636929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:35.329 [2024-11-26 18:32:28.636935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:35.329 [2024-11-26 18:32:28.636941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:35.329 [2024-11-26 18:32:28.636947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:35.329 [2024-11-26 18:32:28.636952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:35.329 [2024-11-26 18:32:28.636959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:35.329 [2024-11-26 18:32:28.636966] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:35.329 [2024-11-26 18:32:28.636974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.329 [2024-11-26 18:32:28.636988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:35.329 [2024-11-26 18:32:28.636995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:35.329 [2024-11-26 18:32:28.637002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:35.329 [2024-11-26 18:32:28.637009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:35.329 [2024-11-26 18:32:28.637015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:35.329 [2024-11-26 18:32:28.637022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:35.329 [2024-11-26 18:32:28.637028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:35.329 [2024-11-26 18:32:28.637035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:35.329 [2024-11-26 18:32:28.637042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:35.329 [2024-11-26 18:32:28.637048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:35.329 [2024-11-26 18:32:28.637055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:35.329 [2024-11-26 18:32:28.637061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:35.329 [2024-11-26 18:32:28.637068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:35.329 [2024-11-26 18:32:28.637074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:35.329 [2024-11-26 18:32:28.637080] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:35.329 [2024-11-26 18:32:28.637088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.329 [2024-11-26 18:32:28.637095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:35.329 [2024-11-26 18:32:28.637102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:35.329 [2024-11-26 18:32:28.637108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:35.329 [2024-11-26 18:32:28.637114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:35.329 [2024-11-26 18:32:28.637122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.329 [2024-11-26 18:32:28.637129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:35.329 [2024-11-26 18:32:28.637137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:28:35.329 [2024-11-26 18:32:28.637143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.589 [2024-11-26 18:32:28.676371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.589 [2024-11-26 18:32:28.676423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:35.589 [2024-11-26 18:32:28.676435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.249 ms 00:28:35.589 [2024-11-26 18:32:28.676446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.589 [2024-11-26 18:32:28.676546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.589 [2024-11-26 18:32:28.676555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:35.589 [2024-11-26 18:32:28.676563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.055 ms 00:28:35.589 [2024-11-26 18:32:28.676570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.589 [2024-11-26 18:32:28.732934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.589 [2024-11-26 18:32:28.732985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:35.589 [2024-11-26 18:32:28.732997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.383 ms 00:28:35.589 [2024-11-26 18:32:28.733005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.589 [2024-11-26 18:32:28.733062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.589 [2024-11-26 18:32:28.733070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:35.589 [2024-11-26 18:32:28.733081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:35.589 [2024-11-26 18:32:28.733088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.589 [2024-11-26 18:32:28.733557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.589 [2024-11-26 18:32:28.733575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:35.589 [2024-11-26 18:32:28.733583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:28:35.589 [2024-11-26 18:32:28.733589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.589 [2024-11-26 18:32:28.733706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.589 [2024-11-26 18:32:28.733722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:35.589 [2024-11-26 18:32:28.733735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:28:35.589 [2024-11-26 18:32:28.733741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.589 [2024-11-26 18:32:28.749344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.589 [2024-11-26 18:32:28.749385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:35.589 [2024-11-26 18:32:28.749412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.611 ms 00:28:35.589 [2024-11-26 18:32:28.749419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.589 [2024-11-26 18:32:28.766932] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:35.589 [2024-11-26 18:32:28.766970] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:35.589 [2024-11-26 18:32:28.766998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.589 [2024-11-26 18:32:28.767006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:35.589 [2024-11-26 18:32:28.767015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.502 ms 00:28:35.589 [2024-11-26 18:32:28.767022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.589 [2024-11-26 18:32:28.794637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.589 [2024-11-26 18:32:28.794686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:35.589 [2024-11-26 18:32:28.794696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.628 ms 00:28:35.589 [2024-11-26 18:32:28.794703] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.589 [2024-11-26 18:32:28.812607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.589 [2024-11-26 18:32:28.812647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:35.589 [2024-11-26 18:32:28.812657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.884 ms 00:28:35.589 [2024-11-26 18:32:28.812663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.589 [2024-11-26 18:32:28.830051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.589 [2024-11-26 18:32:28.830086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:35.589 [2024-11-26 18:32:28.830095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.386 ms 00:28:35.589 [2024-11-26 18:32:28.830102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.589 [2024-11-26 18:32:28.830806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.589 [2024-11-26 18:32:28.830833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:35.589 [2024-11-26 18:32:28.830842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:28:35.589 [2024-11-26 18:32:28.830855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.589 [2024-11-26 18:32:28.913709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.589 [2024-11-26 18:32:28.913771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:35.589 [2024-11-26 18:32:28.913784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.994 ms 00:28:35.589 [2024-11-26 18:32:28.913797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.849 [2024-11-26 18:32:28.924320] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:35.849 [2024-11-26 18:32:28.927202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.849 [2024-11-26 18:32:28.927232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:35.849 [2024-11-26 18:32:28.927243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.366 ms 00:28:35.849 [2024-11-26 18:32:28.927251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.849 [2024-11-26 18:32:28.927352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.849 [2024-11-26 18:32:28.927362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:35.849 [2024-11-26 18:32:28.927371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:35.849 [2024-11-26 18:32:28.927379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.849 [2024-11-26 18:32:28.927447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.849 [2024-11-26 18:32:28.927457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:35.849 [2024-11-26 18:32:28.927465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:35.849 [2024-11-26 18:32:28.927471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.849 [2024-11-26 18:32:28.927488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.849 [2024-11-26 18:32:28.927496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:28:35.849 [2024-11-26 18:32:28.927503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:35.849 [2024-11-26 18:32:28.927509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.849 [2024-11-26 18:32:28.927538] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:35.849 [2024-11-26 18:32:28.927549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.849 [2024-11-26 18:32:28.927557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:35.849 [2024-11-26 18:32:28.927564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:35.849 [2024-11-26 18:32:28.927571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.849 [2024-11-26 18:32:28.962861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.849 [2024-11-26 18:32:28.962941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:35.849 [2024-11-26 18:32:28.962955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.340 ms 00:28:35.849 [2024-11-26 18:32:28.962971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.849 [2024-11-26 18:32:28.963071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.849 [2024-11-26 18:32:28.963081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:35.849 [2024-11-26 18:32:28.963089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:35.849 [2024-11-26 18:32:28.963097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.849 [2024-11-26 18:32:28.964334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 363.820 ms, result 0 00:28:36.782  [2024-11-26T18:32:31.053Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-26T18:32:31.989Z] Copying: 52/1024 [MB] (26 MBps) [2024-11-26T18:32:33.376Z] Copying: 79/1024 [MB] (27 MBps) [2024-11-26T18:32:34.321Z] Copying: 106/1024 [MB] (26 MBps) [2024-11-26T18:32:35.263Z] Copying: 131/1024 [MB] (25 MBps) [2024-11-26T18:32:36.204Z] Copying: 157/1024 [MB] (25 MBps) [2024-11-26T18:32:37.149Z] Copying: 183/1024 [MB] (25 MBps) [2024-11-26T18:32:38.087Z] Copying: 209/1024 [MB] (26 MBps) [2024-11-26T18:32:39.043Z] Copying: 235/1024 [MB] (26 MBps) [2024-11-26T18:32:39.981Z] Copying: 262/1024 [MB] (26 MBps) [2024-11-26T18:32:41.359Z] Copying: 289/1024 [MB] (26 MBps) [2024-11-26T18:32:42.297Z] Copying: 315/1024 [MB] (25 MBps) [2024-11-26T18:32:43.235Z] Copying: 340/1024 [MB] (25 MBps) [2024-11-26T18:32:44.175Z] Copying: 367/1024 [MB] (26 MBps) [2024-11-26T18:32:45.136Z] Copying: 393/1024 [MB] (26 MBps) [2024-11-26T18:32:46.131Z] Copying: 419/1024 [MB] (25 MBps) [2024-11-26T18:32:47.070Z] Copying: 445/1024 [MB] (25 MBps) [2024-11-26T18:32:48.008Z] Copying: 471/1024 [MB] (25 MBps) [2024-11-26T18:32:48.948Z] Copying: 496/1024 [MB] (25 MBps) [2024-11-26T18:32:50.328Z] Copying: 522/1024 [MB] (25 MBps) [2024-11-26T18:32:51.266Z] Copying: 548/1024 [MB] (25 MBps) [2024-11-26T18:32:52.205Z] Copying: 574/1024 [MB] (25 MBps) [2024-11-26T18:32:53.158Z] Copying: 601/1024 [MB] (26 MBps) [2024-11-26T18:32:54.097Z] Copying: 627/1024 [MB] (26 MBps) [2024-11-26T18:32:55.036Z] Copying: 654/1024 [MB] (26 MBps) [2024-11-26T18:32:55.977Z] Copying: 679/1024 [MB] (25 MBps) [2024-11-26T18:32:57.358Z] Copying: 705/1024 [MB] (26 
MBps) [2024-11-26T18:32:57.926Z] Copying: 732/1024 [MB] (26 MBps) [2024-11-26T18:32:59.307Z] Copying: 758/1024 [MB] (25 MBps) [2024-11-26T18:33:00.246Z] Copying: 784/1024 [MB] (25 MBps) [2024-11-26T18:33:01.183Z] Copying: 810/1024 [MB] (25 MBps) [2024-11-26T18:33:02.122Z] Copying: 836/1024 [MB] (26 MBps) [2024-11-26T18:33:03.060Z] Copying: 863/1024 [MB] (26 MBps) [2024-11-26T18:33:03.999Z] Copying: 889/1024 [MB] (26 MBps) [2024-11-26T18:33:04.939Z] Copying: 916/1024 [MB] (26 MBps) [2024-11-26T18:33:06.320Z] Copying: 943/1024 [MB] (26 MBps) [2024-11-26T18:33:07.258Z] Copying: 969/1024 [MB] (25 MBps) [2024-11-26T18:33:08.196Z] Copying: 995/1024 [MB] (25 MBps) [2024-11-26T18:33:08.196Z] Copying: 1022/1024 [MB] (27 MBps) [2024-11-26T18:33:08.196Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-26 18:33:07.969497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.861 [2024-11-26 18:33:07.969600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:14.861 [2024-11-26 18:33:07.969678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:14.861 [2024-11-26 18:33:07.969706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.861 [2024-11-26 18:33:07.969809] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:14.861 [2024-11-26 18:33:07.974094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.861 [2024-11-26 18:33:07.974160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:14.861 [2024-11-26 18:33:07.974198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.235 ms 00:29:14.861 [2024-11-26 18:33:07.974217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.861 [2024-11-26 18:33:07.976091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.861 [2024-11-26 18:33:07.976160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:14.861 [2024-11-26 18:33:07.976194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.844 ms 00:29:14.861 [2024-11-26 18:33:07.976216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.861 [2024-11-26 18:33:07.992867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.861 [2024-11-26 18:33:07.992939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:14.861 [2024-11-26 18:33:07.992967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.639 ms 00:29:14.861 [2024-11-26 18:33:07.992987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.861 [2024-11-26 18:33:07.997926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.861 [2024-11-26 18:33:07.997987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:14.861 [2024-11-26 18:33:07.998014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.899 ms 00:29:14.861 [2024-11-26 18:33:07.998033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.861 [2024-11-26 18:33:08.033385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.861 [2024-11-26 18:33:08.033462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:14.861 [2024-11-26 18:33:08.033493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.360 ms 00:29:14.861 [2024-11-26 
18:33:08.033513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.861 [2024-11-26 18:33:08.053721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.861 [2024-11-26 18:33:08.053794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:14.861 [2024-11-26 18:33:08.053808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.202 ms 00:29:14.861 [2024-11-26 18:33:08.053816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.861 [2024-11-26 18:33:08.053936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.861 [2024-11-26 18:33:08.053952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:14.861 [2024-11-26 18:33:08.053960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:29:14.861 [2024-11-26 18:33:08.053968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.861 [2024-11-26 18:33:08.089594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.861 [2024-11-26 18:33:08.089632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:14.861 [2024-11-26 18:33:08.089643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.682 ms 00:29:14.861 [2024-11-26 18:33:08.089651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.861 [2024-11-26 18:33:08.123755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.861 [2024-11-26 18:33:08.123790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:14.861 [2024-11-26 18:33:08.123800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.138 ms 00:29:14.861 [2024-11-26 18:33:08.123807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.861 [2024-11-26 18:33:08.157730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.861 [2024-11-26 18:33:08.157771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:14.861 [2024-11-26 18:33:08.157785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.957 ms 00:29:14.861 [2024-11-26 18:33:08.157793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.861 [2024-11-26 18:33:08.191519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.861 [2024-11-26 18:33:08.191553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:14.861 [2024-11-26 18:33:08.191562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.729 ms 00:29:14.861 [2024-11-26 18:33:08.191569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.861 [2024-11-26 18:33:08.191599] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:14.861 [2024-11-26 18:33:08.191612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:14.861 [2024-11-26 18:33:08.191636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:14.861 [2024-11-26 18:33:08.191643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:14.861 [2024-11-26 18:33:08.191651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:14.861 [2024-11-26 18:33:08.191658] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:14.861 [2024-11-26 18:33:08.191665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 
18:33:08.191832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.191998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:29:14.862 [2024-11-26 18:33:08.192014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:14.862 [2024-11-26 18:33:08.192162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:14.863 [2024-11-26 18:33:08.192336] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:14.863 [2024-11-26 18:33:08.192345] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ed23b79-c687-4115-982f-0da7fc18820e 00:29:14.863 [2024-11-26 18:33:08.192353] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:14.863 [2024-11-26 18:33:08.192360] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:14.863 [2024-11-26 
18:33:08.192367] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:14.863 [2024-11-26 18:33:08.192374] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:14.863 [2024-11-26 18:33:08.192381] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:14.863 [2024-11-26 18:33:08.192398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:14.863 [2024-11-26 18:33:08.192405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:14.863 [2024-11-26 18:33:08.192411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:14.863 [2024-11-26 18:33:08.192417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:14.863 [2024-11-26 18:33:08.192429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.863 [2024-11-26 18:33:08.192436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:14.863 [2024-11-26 18:33:08.192444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:29:14.863 [2024-11-26 18:33:08.192450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.123 [2024-11-26 18:33:08.211968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.123 [2024-11-26 18:33:08.212001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:15.123 [2024-11-26 18:33:08.212011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.495 ms 00:29:15.123 [2024-11-26 18:33:08.212018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.123 [2024-11-26 18:33:08.212536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.123 [2024-11-26 18:33:08.212550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:15.123 [2024-11-26 18:33:08.212559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:29:15.123 [2024-11-26 18:33:08.212571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.123 [2024-11-26 18:33:08.262806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.123 [2024-11-26 18:33:08.262845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:15.123 [2024-11-26 18:33:08.262855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.123 [2024-11-26 18:33:08.262879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.123 [2024-11-26 18:33:08.262931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.123 [2024-11-26 18:33:08.262940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:15.123 [2024-11-26 18:33:08.262947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.123 [2024-11-26 18:33:08.262959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.123 [2024-11-26 18:33:08.263011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.123 [2024-11-26 18:33:08.263022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:15.123 [2024-11-26 18:33:08.263030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.123 [2024-11-26 18:33:08.263037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.123 [2024-11-26 18:33:08.263052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:29:15.123 [2024-11-26 18:33:08.263060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:15.123 [2024-11-26 18:33:08.263067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.123 [2024-11-26 18:33:08.263074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.123 [2024-11-26 18:33:08.385964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.123 [2024-11-26 18:33:08.386039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:15.123 [2024-11-26 18:33:08.386067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.123 [2024-11-26 18:33:08.386076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.383 [2024-11-26 18:33:08.487124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.383 [2024-11-26 18:33:08.487181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:15.383 [2024-11-26 18:33:08.487193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.383 [2024-11-26 18:33:08.487207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.383 [2024-11-26 18:33:08.487289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.383 [2024-11-26 18:33:08.487298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:15.383 [2024-11-26 18:33:08.487307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.383 [2024-11-26 18:33:08.487314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.383 [2024-11-26 18:33:08.487349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.383 [2024-11-26 18:33:08.487359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:15.384 [2024-11-26 18:33:08.487367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.384 [2024-11-26 18:33:08.487374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.384 [2024-11-26 18:33:08.487484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.384 [2024-11-26 18:33:08.487496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:15.384 [2024-11-26 18:33:08.487504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.384 [2024-11-26 18:33:08.487511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.384 [2024-11-26 18:33:08.487544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.384 [2024-11-26 18:33:08.487554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:15.384 [2024-11-26 18:33:08.487562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.384 [2024-11-26 18:33:08.487569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.384 [2024-11-26 18:33:08.487604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.384 [2024-11-26 18:33:08.487637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:15.384 [2024-11-26 18:33:08.487645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.384 [2024-11-26 18:33:08.487652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.384 
[2024-11-26 18:33:08.487690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.384 [2024-11-26 18:33:08.487699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:15.384 [2024-11-26 18:33:08.487706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.384 [2024-11-26 18:33:08.487713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.384 [2024-11-26 18:33:08.487825] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 519.300 ms, result 0 00:29:17.293 00:29:17.293 00:29:17.293 18:33:10 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:17.293 [2024-11-26 18:33:10.224594] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:29:17.293 [2024-11-26 18:33:10.224736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80573 ] 00:29:17.293 [2024-11-26 18:33:10.403594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.293 [2024-11-26 18:33:10.509887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.552 [2024-11-26 18:33:10.844120] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:17.552 [2024-11-26 18:33:10.844181] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:17.812 [2024-11-26 18:33:10.999230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.812 [2024-11-26 18:33:10.999287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:17.812 [2024-11-26 18:33:10.999300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:17.812 [2024-11-26 18:33:10.999308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.812 [2024-11-26 18:33:10.999349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.812 [2024-11-26 18:33:10.999360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:17.812 [2024-11-26 18:33:10.999367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:17.812 [2024-11-26 18:33:10.999375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.812 [2024-11-26 18:33:10.999392] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:17.812 [2024-11-26 18:33:11.000313] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:17.812 [2024-11-26 18:33:11.000338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.812 [2024-11-26 18:33:11.000346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:17.812 [2024-11-26 18:33:11.000354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:29:17.812 [2024-11-26 18:33:11.000362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.812 [2024-11-26 18:33:11.001772] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 
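The sizes reported so far are internally consistent and can be sanity-checked with plain arithmetic: the earlier dd wrote bs=4K x count=256K = 262144 records = 1 GiB in 3.59355 s (the 299 MB/s it logged), the spdk_dd read-back above uses the same --count=262144, and the L2P region (20971520 entries x 4-byte addresses, per the layout dump) comes to exactly the 80.00 MiB shown for Region l2p. A quick sketch of those checks; the 4 KiB block size is taken from the dd invocation, and no SPDK code is involved:

# Sanity checks for sizes reported in this run (pure arithmetic).
BLOCK = 4 * 1024                     # bs=4K from the dd invocation
records = 256 * 1024                 # count=256K -> 262144 records
total_bytes = records * BLOCK
assert total_bytes == 1073741824     # "1073741824 bytes (1.1 GB, 1.0 GiB) copied"

elapsed_s = 3.59355                  # reported by dd
print(f"{total_bytes / elapsed_s / 1e6:.0f} MB/s")  # -> 299, matching the log

# L2P region: 20971520 entries x 4-byte addresses == 80.00 MiB in the layout dump
l2p_bytes = 20971520 * 4
print(l2p_bytes / (1024 * 1024))     # -> 80.0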
00:29:17.812 [2024-11-26 18:33:11.020320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.812 [2024-11-26 18:33:11.020357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:17.812 [2024-11-26 18:33:11.020368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.600 ms 00:29:17.812 [2024-11-26 18:33:11.020376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.812 [2024-11-26 18:33:11.020435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.812 [2024-11-26 18:33:11.020444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:17.812 [2024-11-26 18:33:11.020452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:17.812 [2024-11-26 18:33:11.020459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.812 [2024-11-26 18:33:11.027014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.812 [2024-11-26 18:33:11.027044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:17.812 [2024-11-26 18:33:11.027054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.510 ms 00:29:17.812 [2024-11-26 18:33:11.027065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.812 [2024-11-26 18:33:11.027135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.812 [2024-11-26 18:33:11.027148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:17.812 [2024-11-26 18:33:11.027156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:17.812 [2024-11-26 18:33:11.027163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.812 [2024-11-26 18:33:11.027200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.812 [2024-11-26 18:33:11.027209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:17.812 [2024-11-26 18:33:11.027218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:17.812 [2024-11-26 18:33:11.027225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.812 [2024-11-26 18:33:11.027250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:17.812 [2024-11-26 18:33:11.031899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.812 [2024-11-26 18:33:11.031947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:17.812 [2024-11-26 18:33:11.031960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.664 ms 00:29:17.812 [2024-11-26 18:33:11.031968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.812 [2024-11-26 18:33:11.031997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.812 [2024-11-26 18:33:11.032006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:17.812 [2024-11-26 18:33:11.032015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:17.812 [2024-11-26 18:33:11.032022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.812 [2024-11-26 18:33:11.032065] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:17.812 [2024-11-26 18:33:11.032087] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob load 0x150 bytes 00:29:17.812 [2024-11-26 18:33:11.032121] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:17.812 [2024-11-26 18:33:11.032139] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:17.812 [2024-11-26 18:33:11.032230] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:17.812 [2024-11-26 18:33:11.032255] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:17.812 [2024-11-26 18:33:11.032265] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:17.812 [2024-11-26 18:33:11.032276] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:17.812 [2024-11-26 18:33:11.032285] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:17.812 [2024-11-26 18:33:11.032294] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:17.812 [2024-11-26 18:33:11.032301] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:17.812 [2024-11-26 18:33:11.032311] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:17.812 [2024-11-26 18:33:11.032319] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:17.812 [2024-11-26 18:33:11.032326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.812 [2024-11-26 18:33:11.032333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:17.812 [2024-11-26 18:33:11.032341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:29:17.812 [2024-11-26 18:33:11.032348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.812 [2024-11-26 18:33:11.032417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.812 [2024-11-26 18:33:11.032426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:17.812 [2024-11-26 18:33:11.032434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:17.812 [2024-11-26 18:33:11.032441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.812 [2024-11-26 18:33:11.032535] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:17.812 [2024-11-26 18:33:11.032550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:17.812 [2024-11-26 18:33:11.032559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:17.812 [2024-11-26 18:33:11.032567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.812 [2024-11-26 18:33:11.032575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:17.812 [2024-11-26 18:33:11.032581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:17.813 [2024-11-26 18:33:11.032588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:17.813 [2024-11-26 18:33:11.032595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:17.813 [2024-11-26 18:33:11.032601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:17.813 [2024-11-26 18:33:11.032611] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:29:17.813 [2024-11-26 18:33:11.032632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:17.813 [2024-11-26 18:33:11.032640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:17.813 [2024-11-26 18:33:11.032647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:17.813 [2024-11-26 18:33:11.032664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:17.813 [2024-11-26 18:33:11.032671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:17.813 [2024-11-26 18:33:11.032678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.813 [2024-11-26 18:33:11.032685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:17.813 [2024-11-26 18:33:11.032692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:17.813 [2024-11-26 18:33:11.032699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.813 [2024-11-26 18:33:11.032707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:17.813 [2024-11-26 18:33:11.032714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:17.813 [2024-11-26 18:33:11.032720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.813 [2024-11-26 18:33:11.032727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:17.813 [2024-11-26 18:33:11.032733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:17.813 [2024-11-26 18:33:11.032739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.813 [2024-11-26 18:33:11.032745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:17.813 [2024-11-26 18:33:11.032751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:17.813 [2024-11-26 18:33:11.032757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.813 [2024-11-26 18:33:11.032763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:17.813 [2024-11-26 18:33:11.032770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:17.813 [2024-11-26 18:33:11.032776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:17.813 [2024-11-26 18:33:11.032783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:17.813 [2024-11-26 18:33:11.032796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:17.813 [2024-11-26 18:33:11.032803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:17.813 [2024-11-26 18:33:11.032809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:17.813 [2024-11-26 18:33:11.032815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:17.813 [2024-11-26 18:33:11.032822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:17.813 [2024-11-26 18:33:11.032828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:17.813 [2024-11-26 18:33:11.032835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:17.813 [2024-11-26 18:33:11.032841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.813 [2024-11-26 18:33:11.032848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:17.813 [2024-11-26 18:33:11.032856] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:17.813 [2024-11-26 18:33:11.032863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.813 [2024-11-26 18:33:11.032869] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:17.813 [2024-11-26 18:33:11.032877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:17.813 [2024-11-26 18:33:11.032884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:17.813 [2024-11-26 18:33:11.032892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:17.813 [2024-11-26 18:33:11.032900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:17.813 [2024-11-26 18:33:11.032907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:17.813 [2024-11-26 18:33:11.032914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:17.813 [2024-11-26 18:33:11.032921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:17.813 [2024-11-26 18:33:11.032927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:17.813 [2024-11-26 18:33:11.032934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:17.813 [2024-11-26 18:33:11.032943] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:17.813 [2024-11-26 18:33:11.032952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:17.813 [2024-11-26 18:33:11.032964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:17.813 [2024-11-26 18:33:11.032971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:17.813 [2024-11-26 18:33:11.032979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:17.813 [2024-11-26 18:33:11.032986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:17.813 [2024-11-26 18:33:11.032994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:17.813 [2024-11-26 18:33:11.033002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:17.813 [2024-11-26 18:33:11.033009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:17.813 [2024-11-26 18:33:11.033016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:17.813 [2024-11-26 18:33:11.033024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:17.813 [2024-11-26 18:33:11.033031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:17.813 [2024-11-26 18:33:11.033038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 
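The superblock metadata tables above and below record each region as a blk_offs/blk_sz pair counted in FTL blocks, and they line up with the MiB figures in the earlier layout dump once a 4096-byte block is assumed: region type 0x2 has blk_sz:0x5000 = 20480 blocks = 80.00 MiB, matching the l2p region, and the data_btm region's 102400.00 MiB corresponds to the 0x1900000-block entry in the base-device table that follows. A throwaway helper for cross-checking any line of the dump under that block-size assumption:

  # Convert a blk_sz value from the superblock dump into the MiB figure
  # shown in the layout dump, assuming the 4096-byte FTL block it implies.
  blk_to_mib() { echo "scale=2; $(( $1 )) * 4096 / 1048576" | bc; }
  blk_to_mib 0x5000     # 80.00     -> Region l2p, blocks: 80.00 MiB
  blk_to_mib 0x1900000  # 102400.00 -> Region data_btm on the base device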
00:29:17.813 [2024-11-26 18:33:11.033046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:17.813 [2024-11-26 18:33:11.033053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:17.813 [2024-11-26 18:33:11.033061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:17.813 [2024-11-26 18:33:11.033068] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:17.813 [2024-11-26 18:33:11.033078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:17.813 [2024-11-26 18:33:11.033086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:17.813 [2024-11-26 18:33:11.033094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:17.813 [2024-11-26 18:33:11.033102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:17.813 [2024-11-26 18:33:11.033110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:17.813 [2024-11-26 18:33:11.033119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.813 [2024-11-26 18:33:11.033127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:17.813 [2024-11-26 18:33:11.033135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:29:17.813 [2024-11-26 18:33:11.033142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.813 [2024-11-26 18:33:11.068691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.813 [2024-11-26 18:33:11.068729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:17.813 [2024-11-26 18:33:11.068741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.569 ms 00:29:17.813 [2024-11-26 18:33:11.068768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.813 [2024-11-26 18:33:11.068857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.813 [2024-11-26 18:33:11.068867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:17.813 [2024-11-26 18:33:11.068875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:17.813 [2024-11-26 18:33:11.068882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.813 [2024-11-26 18:33:11.123502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.813 [2024-11-26 18:33:11.123540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:17.813 [2024-11-26 18:33:11.123552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.661 ms 00:29:17.813 [2024-11-26 18:33:11.123560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.813 [2024-11-26 18:33:11.123596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.813 [2024-11-26 18:33:11.123605] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:17.813 [2024-11-26 18:33:11.123626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:17.813 [2024-11-26 18:33:11.123634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.813 [2024-11-26 18:33:11.124096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.813 [2024-11-26 18:33:11.124203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:17.813 [2024-11-26 18:33:11.124215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:29:17.813 [2024-11-26 18:33:11.124222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.813 [2024-11-26 18:33:11.124335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.813 [2024-11-26 18:33:11.124348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:17.813 [2024-11-26 18:33:11.124362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:29:17.813 [2024-11-26 18:33:11.124369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.813 [2024-11-26 18:33:11.140504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.813 [2024-11-26 18:33:11.140540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:17.814 [2024-11-26 18:33:11.140551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.147 ms 00:29:17.814 [2024-11-26 18:33:11.140558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.073 [2024-11-26 18:33:11.159320] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:18.073 [2024-11-26 18:33:11.159417] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:18.073 [2024-11-26 18:33:11.159432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.073 [2024-11-26 18:33:11.159440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:18.073 [2024-11-26 18:33:11.159448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.798 ms 00:29:18.073 [2024-11-26 18:33:11.159456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.073 [2024-11-26 18:33:11.188509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.073 [2024-11-26 18:33:11.188587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:18.073 [2024-11-26 18:33:11.188601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.070 ms 00:29:18.073 [2024-11-26 18:33:11.188624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.073 [2024-11-26 18:33:11.206523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.073 [2024-11-26 18:33:11.206594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:18.073 [2024-11-26 18:33:11.206623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.865 ms 00:29:18.073 [2024-11-26 18:33:11.206645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.073 [2024-11-26 18:33:11.223990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.073 [2024-11-26 18:33:11.224023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:18.073 
[2024-11-26 18:33:11.224033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.342 ms 00:29:18.073 [2024-11-26 18:33:11.224040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.073 [2024-11-26 18:33:11.224838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.073 [2024-11-26 18:33:11.224862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:18.073 [2024-11-26 18:33:11.224874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:29:18.073 [2024-11-26 18:33:11.224881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.073 [2024-11-26 18:33:11.308395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.073 [2024-11-26 18:33:11.308459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:18.073 [2024-11-26 18:33:11.308478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.655 ms 00:29:18.073 [2024-11-26 18:33:11.308485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.073 [2024-11-26 18:33:11.318891] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:18.073 [2024-11-26 18:33:11.321647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.073 [2024-11-26 18:33:11.321675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:18.073 [2024-11-26 18:33:11.321686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.136 ms 00:29:18.073 [2024-11-26 18:33:11.321693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.073 [2024-11-26 18:33:11.321770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.073 [2024-11-26 18:33:11.321780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:18.073 [2024-11-26 18:33:11.321792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:18.073 [2024-11-26 18:33:11.321799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.073 [2024-11-26 18:33:11.321864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.073 [2024-11-26 18:33:11.321875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:18.073 [2024-11-26 18:33:11.321882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:18.073 [2024-11-26 18:33:11.321889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.073 [2024-11-26 18:33:11.321906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.073 [2024-11-26 18:33:11.321914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:18.073 [2024-11-26 18:33:11.321921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:18.074 [2024-11-26 18:33:11.321927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.074 [2024-11-26 18:33:11.321958] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:18.074 [2024-11-26 18:33:11.321968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.074 [2024-11-26 18:33:11.321974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:18.074 [2024-11-26 18:33:11.321981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:18.074 
[2024-11-26 18:33:11.321988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.074 [2024-11-26 18:33:11.357486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.074 [2024-11-26 18:33:11.357562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:18.074 [2024-11-26 18:33:11.357598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.550 ms 00:29:18.074 [2024-11-26 18:33:11.357624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.074 [2024-11-26 18:33:11.357705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.074 [2024-11-26 18:33:11.357730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:18.074 [2024-11-26 18:33:11.357749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:18.074 [2024-11-26 18:33:11.357768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.074 [2024-11-26 18:33:11.358977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 359.927 ms, result 0 00:29:19.464  [2024-11-26T18:33:13.736Z] Copying: 28/1024 [MB] (28 MBps) [2024-11-26T18:33:14.675Z] Copying: 56/1024 [MB] (28 MBps) [2024-11-26T18:33:15.614Z] Copying: 83/1024 [MB] (27 MBps) [2024-11-26T18:33:16.550Z] Copying: 112/1024 [MB] (28 MBps) [2024-11-26T18:33:17.929Z] Copying: 140/1024 [MB] (27 MBps) [2024-11-26T18:33:18.870Z] Copying: 169/1024 [MB] (29 MBps) [2024-11-26T18:33:19.807Z] Copying: 198/1024 [MB] (29 MBps) [2024-11-26T18:33:20.747Z] Copying: 227/1024 [MB] (28 MBps) [2024-11-26T18:33:21.687Z] Copying: 255/1024 [MB] (28 MBps) [2024-11-26T18:33:22.627Z] Copying: 283/1024 [MB] (28 MBps) [2024-11-26T18:33:23.568Z] Copying: 311/1024 [MB] (27 MBps) [2024-11-26T18:33:24.508Z] Copying: 339/1024 [MB] (27 MBps) [2024-11-26T18:33:25.517Z] Copying: 367/1024 [MB] (28 MBps) [2024-11-26T18:33:26.895Z] Copying: 395/1024 [MB] (28 MBps) [2024-11-26T18:33:27.831Z] Copying: 423/1024 [MB] (27 MBps) [2024-11-26T18:33:28.770Z] Copying: 450/1024 [MB] (27 MBps) [2024-11-26T18:33:29.709Z] Copying: 480/1024 [MB] (29 MBps) [2024-11-26T18:33:30.647Z] Copying: 509/1024 [MB] (29 MBps) [2024-11-26T18:33:31.586Z] Copying: 538/1024 [MB] (29 MBps) [2024-11-26T18:33:32.527Z] Copying: 568/1024 [MB] (29 MBps) [2024-11-26T18:33:33.909Z] Copying: 596/1024 [MB] (28 MBps) [2024-11-26T18:33:34.478Z] Copying: 623/1024 [MB] (27 MBps) [2024-11-26T18:33:35.860Z] Copying: 651/1024 [MB] (27 MBps) [2024-11-26T18:33:36.796Z] Copying: 678/1024 [MB] (27 MBps) [2024-11-26T18:33:37.798Z] Copying: 706/1024 [MB] (27 MBps) [2024-11-26T18:33:38.762Z] Copying: 733/1024 [MB] (27 MBps) [2024-11-26T18:33:39.702Z] Copying: 761/1024 [MB] (27 MBps) [2024-11-26T18:33:40.642Z] Copying: 788/1024 [MB] (27 MBps) [2024-11-26T18:33:41.581Z] Copying: 816/1024 [MB] (27 MBps) [2024-11-26T18:33:42.520Z] Copying: 844/1024 [MB] (28 MBps) [2024-11-26T18:33:43.457Z] Copying: 873/1024 [MB] (28 MBps) [2024-11-26T18:33:44.842Z] Copying: 901/1024 [MB] (28 MBps) [2024-11-26T18:33:45.782Z] Copying: 930/1024 [MB] (28 MBps) [2024-11-26T18:33:46.722Z] Copying: 958/1024 [MB] (28 MBps) [2024-11-26T18:33:47.661Z] Copying: 988/1024 [MB] (29 MBps) [2024-11-26T18:33:47.921Z] Copying: 1017/1024 [MB] (29 MBps) [2024-11-26T18:33:48.859Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-26 18:33:48.752496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.524 [2024-11-26 18:33:48.752601] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:55.524 [2024-11-26 18:33:48.752661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:55.524 [2024-11-26 18:33:48.752680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.524 [2024-11-26 18:33:48.752729] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:55.524 [2024-11-26 18:33:48.761662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.524 [2024-11-26 18:33:48.761733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:55.524 [2024-11-26 18:33:48.761752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.916 ms 00:29:55.524 [2024-11-26 18:33:48.761764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.524 [2024-11-26 18:33:48.762096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.524 [2024-11-26 18:33:48.762113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:55.524 [2024-11-26 18:33:48.762127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:29:55.524 [2024-11-26 18:33:48.762139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.525 [2024-11-26 18:33:48.766866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.525 [2024-11-26 18:33:48.766915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:55.525 [2024-11-26 18:33:48.766930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.714 ms 00:29:55.525 [2024-11-26 18:33:48.766951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.525 [2024-11-26 18:33:48.773035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.525 [2024-11-26 18:33:48.773078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:55.525 [2024-11-26 18:33:48.773090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.029 ms 00:29:55.525 [2024-11-26 18:33:48.773098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.525 [2024-11-26 18:33:48.813078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.525 [2024-11-26 18:33:48.813136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:55.525 [2024-11-26 18:33:48.813151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.970 ms 00:29:55.525 [2024-11-26 18:33:48.813159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.525 [2024-11-26 18:33:48.834916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.525 [2024-11-26 18:33:48.834972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:55.525 [2024-11-26 18:33:48.834985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.737 ms 00:29:55.525 [2024-11-26 18:33:48.835020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.525 [2024-11-26 18:33:48.835168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.525 [2024-11-26 18:33:48.835180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:55.525 [2024-11-26 18:33:48.835189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:29:55.525 [2024-11-26 18:33:48.835197] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.785 [2024-11-26 18:33:48.872251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.785 [2024-11-26 18:33:48.872296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:55.785 [2024-11-26 18:33:48.872310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.109 ms 00:29:55.785 [2024-11-26 18:33:48.872317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.785 [2024-11-26 18:33:48.908591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.785 [2024-11-26 18:33:48.908681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:55.785 [2024-11-26 18:33:48.908697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.299 ms 00:29:55.785 [2024-11-26 18:33:48.908704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.785 [2024-11-26 18:33:48.944722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.785 [2024-11-26 18:33:48.944768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:55.785 [2024-11-26 18:33:48.944781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.011 ms 00:29:55.785 [2024-11-26 18:33:48.944788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.785 [2024-11-26 18:33:48.981920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.785 [2024-11-26 18:33:48.981973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:55.785 [2024-11-26 18:33:48.981987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.103 ms 00:29:55.785 [2024-11-26 18:33:48.981996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.785 [2024-11-26 18:33:48.982040] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:55.785 [2024-11-26 18:33:48.982068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: 
free 00:29:55.785 [2024-11-26 18:33:48.982169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 
261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:55.785 [2024-11-26 18:33:48.982462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982808] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:55.786 [2024-11-26 18:33:48.982961] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:55.786 [2024-11-26 18:33:48.982969] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ed23b79-c687-4115-982f-0da7fc18820e 00:29:55.786 [2024-11-26 18:33:48.982979] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:55.786 [2024-11-26 18:33:48.982986] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:55.786 [2024-11-26 18:33:48.982995] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:55.786 [2024-11-26 18:33:48.983004] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:55.786 [2024-11-26 18:33:48.983026] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:55.786 [2024-11-26 18:33:48.983035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:55.786 [2024-11-26 18:33:48.983043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:55.786 [2024-11-26 18:33:48.983051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:55.786 [2024-11-26 18:33:48.983058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:55.786 [2024-11-26 18:33:48.983067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.786 [2024-11-26 18:33:48.983076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Dump statistics 00:29:55.786 [2024-11-26 18:33:48.983086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:29:55.786 [2024-11-26 18:33:48.983098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.786 [2024-11-26 18:33:49.006084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.786 [2024-11-26 18:33:49.006143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:55.786 [2024-11-26 18:33:49.006158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.987 ms 00:29:55.786 [2024-11-26 18:33:49.006168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.786 [2024-11-26 18:33:49.006857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.786 [2024-11-26 18:33:49.006868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:55.786 [2024-11-26 18:33:49.006887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:29:55.786 [2024-11-26 18:33:49.006895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.786 [2024-11-26 18:33:49.065330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.786 [2024-11-26 18:33:49.065393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:55.786 [2024-11-26 18:33:49.065407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.786 [2024-11-26 18:33:49.065416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.786 [2024-11-26 18:33:49.065493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.786 [2024-11-26 18:33:49.065502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:55.786 [2024-11-26 18:33:49.065518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.786 [2024-11-26 18:33:49.065526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.786 [2024-11-26 18:33:49.065602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.786 [2024-11-26 18:33:49.065631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:55.786 [2024-11-26 18:33:49.065641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.786 [2024-11-26 18:33:49.065650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.786 [2024-11-26 18:33:49.065669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:55.786 [2024-11-26 18:33:49.065678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:55.786 [2024-11-26 18:33:49.065687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:55.786 [2024-11-26 18:33:49.065699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.045 [2024-11-26 18:33:49.196456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.045 [2024-11-26 18:33:49.196518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:56.045 [2024-11-26 18:33:49.196532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.045 [2024-11-26 18:33:49.196540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.045 [2024-11-26 18:33:49.306010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.045 [2024-11-26 
18:33:49.306074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:56.045 [2024-11-26 18:33:49.306096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.045 [2024-11-26 18:33:49.306105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.045 [2024-11-26 18:33:49.306190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.045 [2024-11-26 18:33:49.306201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:56.045 [2024-11-26 18:33:49.306209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.045 [2024-11-26 18:33:49.306216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.045 [2024-11-26 18:33:49.306256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.045 [2024-11-26 18:33:49.306265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:56.045 [2024-11-26 18:33:49.306273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.045 [2024-11-26 18:33:49.306280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.045 [2024-11-26 18:33:49.306396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.045 [2024-11-26 18:33:49.306408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:56.045 [2024-11-26 18:33:49.306416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.045 [2024-11-26 18:33:49.306423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.045 [2024-11-26 18:33:49.306456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.045 [2024-11-26 18:33:49.306466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:56.045 [2024-11-26 18:33:49.306474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.045 [2024-11-26 18:33:49.306481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.045 [2024-11-26 18:33:49.306520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.045 [2024-11-26 18:33:49.306529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:56.045 [2024-11-26 18:33:49.306536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.045 [2024-11-26 18:33:49.306543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.045 [2024-11-26 18:33:49.306583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.045 [2024-11-26 18:33:49.306592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:56.045 [2024-11-26 18:33:49.306600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.045 [2024-11-26 18:33:49.306608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.045 [2024-11-26 18:33:49.306746] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 555.311 ms, result 0 00:29:56.981 00:29:56.981 00:29:57.240 18:33:50 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:59.148 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
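That OK line is the payoff of ftl_restore: after a full 'FTL shutdown' and 'FTL startup' cycle, the data read back out of ftl0 still matches the checksum recorded when the test file was first written. The restore.sh@79 command that follows reverses direction, writing the file back into the FTL bdev with --if/--ob and skipping the first --seek=131072 output blocks; if the 4 KiB IO unit implied by the earlier --count=262144 (1024 MiB copied) applies here as well, that offset lands 512 MiB into the device. A sketch of the write-back direction, again with /tmp stand-ins:

  # Sketch only: write the verified file back into ftl0 at an offset.
  ./build/bin/spdk_dd --if=/tmp/testfile --ob=ftl0 \
      --json=./test/ftl/config/ftl.json --seek=131072
  # 131072 blocks x 4096 B = 512 MiB, assuming the 4 KiB unit noted above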
00:29:59.148 18:33:52 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:29:59.148 [2024-11-26 18:33:52.101480] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:29:59.148 [2024-11-26 18:33:52.101602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80988 ] 00:29:59.148 [2024-11-26 18:33:52.274106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.148 [2024-11-26 18:33:52.385140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.408 [2024-11-26 18:33:52.726812] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:59.408 [2024-11-26 18:33:52.726886] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:59.667 [2024-11-26 18:33:52.881920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.667 [2024-11-26 18:33:52.881991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:59.667 [2024-11-26 18:33:52.882005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:59.667 [2024-11-26 18:33:52.882013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.667 [2024-11-26 18:33:52.882060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.667 [2024-11-26 18:33:52.882073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:59.667 [2024-11-26 18:33:52.882081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:59.667 [2024-11-26 18:33:52.882089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.667 [2024-11-26 18:33:52.882106] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:59.667 [2024-11-26 18:33:52.883130] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:59.667 [2024-11-26 18:33:52.883151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.667 [2024-11-26 18:33:52.883159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:59.667 [2024-11-26 18:33:52.883168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:29:59.667 [2024-11-26 18:33:52.883175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.667 [2024-11-26 18:33:52.884601] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:59.667 [2024-11-26 18:33:52.903113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.667 [2024-11-26 18:33:52.903152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:59.667 [2024-11-26 18:33:52.903163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.563 ms 00:29:59.667 [2024-11-26 18:33:52.903189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.667 [2024-11-26 18:33:52.903251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.667 [2024-11-26 18:33:52.903261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:59.667 [2024-11-26 18:33:52.903282] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:59.667 [2024-11-26 18:33:52.903290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.667 [2024-11-26 18:33:52.909985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.667 [2024-11-26 18:33:52.910016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:59.667 [2024-11-26 18:33:52.910025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.629 ms 00:29:59.667 [2024-11-26 18:33:52.910036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.667 [2024-11-26 18:33:52.910103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.667 [2024-11-26 18:33:52.910116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:59.667 [2024-11-26 18:33:52.910125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:59.667 [2024-11-26 18:33:52.910132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.667 [2024-11-26 18:33:52.910173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.667 [2024-11-26 18:33:52.910182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:59.667 [2024-11-26 18:33:52.910190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:59.667 [2024-11-26 18:33:52.910197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.667 [2024-11-26 18:33:52.910223] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:59.667 [2024-11-26 18:33:52.914759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.667 [2024-11-26 18:33:52.914784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:59.668 [2024-11-26 18:33:52.914796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.551 ms 00:29:59.668 [2024-11-26 18:33:52.914804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.668 [2024-11-26 18:33:52.914831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.668 [2024-11-26 18:33:52.914852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:59.668 [2024-11-26 18:33:52.914860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:59.668 [2024-11-26 18:33:52.914867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.668 [2024-11-26 18:33:52.914913] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:59.668 [2024-11-26 18:33:52.914932] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:59.668 [2024-11-26 18:33:52.914964] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:59.668 [2024-11-26 18:33:52.914980] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:59.668 [2024-11-26 18:33:52.915067] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:59.668 [2024-11-26 18:33:52.915078] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:59.668 [2024-11-26 18:33:52.915087] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:59.668 [2024-11-26 18:33:52.915097] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:59.668 [2024-11-26 18:33:52.915105] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:59.668 [2024-11-26 18:33:52.915112] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:59.668 [2024-11-26 18:33:52.915120] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:59.668 [2024-11-26 18:33:52.915129] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:59.668 [2024-11-26 18:33:52.915136] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:59.668 [2024-11-26 18:33:52.915144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.668 [2024-11-26 18:33:52.915152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:59.668 [2024-11-26 18:33:52.915159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:29:59.668 [2024-11-26 18:33:52.915167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.668 [2024-11-26 18:33:52.915232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.668 [2024-11-26 18:33:52.915240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:59.668 [2024-11-26 18:33:52.915247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:59.668 [2024-11-26 18:33:52.915255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.668 [2024-11-26 18:33:52.915345] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:59.668 [2024-11-26 18:33:52.915358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:59.668 [2024-11-26 18:33:52.915365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:59.668 [2024-11-26 18:33:52.915373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.668 [2024-11-26 18:33:52.915381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:59.668 [2024-11-26 18:33:52.915387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:59.668 [2024-11-26 18:33:52.915394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:59.668 [2024-11-26 18:33:52.915402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:59.668 [2024-11-26 18:33:52.915408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:59.668 [2024-11-26 18:33:52.915415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:59.668 [2024-11-26 18:33:52.915422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:59.668 [2024-11-26 18:33:52.915429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:59.668 [2024-11-26 18:33:52.915435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:59.668 [2024-11-26 18:33:52.915450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:59.668 [2024-11-26 18:33:52.915457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:59.668 [2024-11-26 18:33:52.915464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.668 [2024-11-26 18:33:52.915471] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:59.668 [2024-11-26 18:33:52.915478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:59.668 [2024-11-26 18:33:52.915484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.668 [2024-11-26 18:33:52.915491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:59.668 [2024-11-26 18:33:52.915497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:59.668 [2024-11-26 18:33:52.915504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.668 [2024-11-26 18:33:52.915510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:59.668 [2024-11-26 18:33:52.915517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:59.668 [2024-11-26 18:33:52.915523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.668 [2024-11-26 18:33:52.915529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:59.668 [2024-11-26 18:33:52.915536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:59.668 [2024-11-26 18:33:52.915543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.668 [2024-11-26 18:33:52.915549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:59.668 [2024-11-26 18:33:52.915555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:59.668 [2024-11-26 18:33:52.915561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:59.668 [2024-11-26 18:33:52.915567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:59.668 [2024-11-26 18:33:52.915574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:59.668 [2024-11-26 18:33:52.915580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:59.668 [2024-11-26 18:33:52.915586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:59.668 [2024-11-26 18:33:52.915592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:59.668 [2024-11-26 18:33:52.915598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:59.668 [2024-11-26 18:33:52.915604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:59.668 [2024-11-26 18:33:52.915610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:59.668 [2024-11-26 18:33:52.915637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.668 [2024-11-26 18:33:52.915644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:59.668 [2024-11-26 18:33:52.915651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:59.668 [2024-11-26 18:33:52.915658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.668 [2024-11-26 18:33:52.915665] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:59.668 [2024-11-26 18:33:52.915673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:59.668 [2024-11-26 18:33:52.915679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:59.668 [2024-11-26 18:33:52.915687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:59.668 [2024-11-26 18:33:52.915694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:59.668 
[2024-11-26 18:33:52.915701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:59.668 [2024-11-26 18:33:52.915707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:59.668 [2024-11-26 18:33:52.915713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:59.668 [2024-11-26 18:33:52.915720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:59.668 [2024-11-26 18:33:52.915726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:59.668 [2024-11-26 18:33:52.915734] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:59.668 [2024-11-26 18:33:52.915743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:59.668 [2024-11-26 18:33:52.915754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:59.668 [2024-11-26 18:33:52.915761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:59.668 [2024-11-26 18:33:52.915767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:59.668 [2024-11-26 18:33:52.915774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:59.668 [2024-11-26 18:33:52.915781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:59.668 [2024-11-26 18:33:52.915787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:59.668 [2024-11-26 18:33:52.915794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:59.668 [2024-11-26 18:33:52.915801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:59.668 [2024-11-26 18:33:52.915807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:59.668 [2024-11-26 18:33:52.915814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:59.668 [2024-11-26 18:33:52.915821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:59.668 [2024-11-26 18:33:52.915828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:59.668 [2024-11-26 18:33:52.915835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:59.668 [2024-11-26 18:33:52.915842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:59.668 [2024-11-26 18:33:52.915849] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:59.668 [2024-11-26 18:33:52.915857] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:59.668 [2024-11-26 18:33:52.915865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:59.668 [2024-11-26 18:33:52.915872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:59.668 [2024-11-26 18:33:52.915880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:59.668 [2024-11-26 18:33:52.915888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:59.668 [2024-11-26 18:33:52.915896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.668 [2024-11-26 18:33:52.915903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:59.668 [2024-11-26 18:33:52.915923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:29:59.668 [2024-11-26 18:33:52.915931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.668 [2024-11-26 18:33:52.953054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.668 [2024-11-26 18:33:52.953097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:59.668 [2024-11-26 18:33:52.953110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.145 ms 00:29:59.668 [2024-11-26 18:33:52.953122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.668 [2024-11-26 18:33:52.953207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.668 [2024-11-26 18:33:52.953216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:59.668 [2024-11-26 18:33:52.953224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:59.668 [2024-11-26 18:33:52.953231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.011348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.011390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:59.928 [2024-11-26 18:33:53.011402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.167 ms 00:29:59.928 [2024-11-26 18:33:53.011409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.011448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.011456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:59.928 [2024-11-26 18:33:53.011467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:59.928 [2024-11-26 18:33:53.011474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.011955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.011968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:59.928 [2024-11-26 18:33:53.011976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:29:59.928 [2024-11-26 18:33:53.011983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 
18:33:53.012087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.012099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:59.928 [2024-11-26 18:33:53.012112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:29:59.928 [2024-11-26 18:33:53.012119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.030768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.030809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:59.928 [2024-11-26 18:33:53.030821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.663 ms 00:29:59.928 [2024-11-26 18:33:53.030829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.049604] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:59.928 [2024-11-26 18:33:53.049646] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:59.928 [2024-11-26 18:33:53.049659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.049667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:59.928 [2024-11-26 18:33:53.049676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.767 ms 00:29:59.928 [2024-11-26 18:33:53.049683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.078034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.078087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:59.928 [2024-11-26 18:33:53.078099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.366 ms 00:29:59.928 [2024-11-26 18:33:53.078106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.095559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.095593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:59.928 [2024-11-26 18:33:53.095603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.436 ms 00:29:59.928 [2024-11-26 18:33:53.095609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.113068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.113144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:59.928 [2024-11-26 18:33:53.113157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.450 ms 00:29:59.928 [2024-11-26 18:33:53.113180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.113971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.113988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:59.928 [2024-11-26 18:33:53.113999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:29:59.928 [2024-11-26 18:33:53.114007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.197237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.197298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:59.928 [2024-11-26 18:33:53.197333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.372 ms 00:29:59.928 [2024-11-26 18:33:53.197342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.207811] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:59.928 [2024-11-26 18:33:53.210407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.210437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:59.928 [2024-11-26 18:33:53.210448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.035 ms 00:29:59.928 [2024-11-26 18:33:53.210471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.210550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.210561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:59.928 [2024-11-26 18:33:53.210573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:59.928 [2024-11-26 18:33:53.210580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.210658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.210669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:59.928 [2024-11-26 18:33:53.210677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:29:59.928 [2024-11-26 18:33:53.210684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.210702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.210711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:59.928 [2024-11-26 18:33:53.210718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:59.928 [2024-11-26 18:33:53.210726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.210760] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:59.928 [2024-11-26 18:33:53.210770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.210804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:59.928 [2024-11-26 18:33:53.210812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:59.928 [2024-11-26 18:33:53.210831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.246557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.246594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:59.928 [2024-11-26 18:33:53.246626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.778 ms 00:29:59.928 [2024-11-26 18:33:53.246645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.246714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:59.928 [2024-11-26 18:33:53.246724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:29:59.928 [2024-11-26 18:33:53.246732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:59.928 [2024-11-26 18:33:53.246740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:59.928 [2024-11-26 18:33:53.248042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 366.351 ms, result 0 00:30:01.309 [2024-11-26T18:34:32.575Z] Copying: 1024/1024 [MB] (average 26 MBps) [2024-11-26 18:34:32.358691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.240 [2024-11-26 18:34:32.358757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:39.240 [2024-11-26 18:34:32.358782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:39.240 [2024-11-26 18:34:32.358790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.240 [2024-11-26 18:34:32.359927] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:39.240 [2024-11-26 18:34:32.364368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.240 
[2024-11-26 18:34:32.364411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:39.240 [2024-11-26 18:34:32.364422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.415 ms 00:30:39.240 [2024-11-26 18:34:32.364430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.241 [2024-11-26 18:34:32.374314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.241 [2024-11-26 18:34:32.374354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:39.241 [2024-11-26 18:34:32.374365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.959 ms 00:30:39.241 [2024-11-26 18:34:32.374379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.241 [2024-11-26 18:34:32.397904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.241 [2024-11-26 18:34:32.398054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:39.241 [2024-11-26 18:34:32.398074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.555 ms 00:30:39.241 [2024-11-26 18:34:32.398086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.241 [2024-11-26 18:34:32.403083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.241 [2024-11-26 18:34:32.403116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:39.241 [2024-11-26 18:34:32.403125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.969 ms 00:30:39.241 [2024-11-26 18:34:32.403139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.241 [2024-11-26 18:34:32.438451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.241 [2024-11-26 18:34:32.438484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:39.241 [2024-11-26 18:34:32.438495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.340 ms 00:30:39.241 [2024-11-26 18:34:32.438502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.241 [2024-11-26 18:34:32.458409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.241 [2024-11-26 18:34:32.458518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:39.241 [2024-11-26 18:34:32.458533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.914 ms 00:30:39.241 [2024-11-26 18:34:32.458541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.241 [2024-11-26 18:34:32.569456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.241 [2024-11-26 18:34:32.569588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:39.241 [2024-11-26 18:34:32.569607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.094 ms 00:30:39.241 [2024-11-26 18:34:32.569640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.502 [2024-11-26 18:34:32.605320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.502 [2024-11-26 18:34:32.605365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:39.502 [2024-11-26 18:34:32.605377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.727 ms 00:30:39.502 [2024-11-26 18:34:32.605385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.502 [2024-11-26 18:34:32.639118] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.502 [2024-11-26 18:34:32.639229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:39.502 [2024-11-26 18:34:32.639244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.764 ms 00:30:39.502 [2024-11-26 18:34:32.639250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.502 [2024-11-26 18:34:32.672971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.502 [2024-11-26 18:34:32.673005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:39.502 [2024-11-26 18:34:32.673016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.752 ms 00:30:39.502 [2024-11-26 18:34:32.673023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.502 [2024-11-26 18:34:32.705060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.502 [2024-11-26 18:34:32.705147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:39.502 [2024-11-26 18:34:32.705160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.035 ms 00:30:39.502 [2024-11-26 18:34:32.705167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.502 [2024-11-26 18:34:32.705198] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:39.502 [2024-11-26 18:34:32.705211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 109312 / 261120 wr_cnt: 1 state: open 00:30:39.502 [2024-11-26 18:34:32.705220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 
wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:39.502 [2024-11-26 18:34:32.705435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705660] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705831] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:39.503 [2024-11-26 18:34:32.705912] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:39.503 [2024-11-26 18:34:32.705918] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ed23b79-c687-4115-982f-0da7fc18820e 00:30:39.503 [2024-11-26 18:34:32.705925] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 109312 00:30:39.503 [2024-11-26 18:34:32.705931] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 110272 00:30:39.503 [2024-11-26 18:34:32.705938] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 109312 00:30:39.503 [2024-11-26 18:34:32.705961] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0088 00:30:39.503 [2024-11-26 18:34:32.705985] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:39.504 [2024-11-26 18:34:32.705992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:39.504 [2024-11-26 18:34:32.705999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:39.504 [2024-11-26 18:34:32.706005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:39.504 [2024-11-26 18:34:32.706011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:39.504 [2024-11-26 18:34:32.706017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.504 [2024-11-26 18:34:32.706024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:39.504 [2024-11-26 18:34:32.706032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.822 ms 00:30:39.504 [2024-11-26 18:34:32.706039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.504 [2024-11-26 18:34:32.724478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.504 [2024-11-26 18:34:32.724515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:39.504 [2024-11-26 18:34:32.724532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 18.447 ms 00:30:39.504 [2024-11-26 18:34:32.724540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.504 [2024-11-26 18:34:32.725067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.504 [2024-11-26 18:34:32.725078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:39.504 [2024-11-26 18:34:32.725086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:30:39.504 [2024-11-26 18:34:32.725093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.504 [2024-11-26 18:34:32.771985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.504 [2024-11-26 18:34:32.772025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:39.504 [2024-11-26 18:34:32.772036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.504 [2024-11-26 18:34:32.772043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.504 [2024-11-26 18:34:32.772096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.504 [2024-11-26 18:34:32.772104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:39.504 [2024-11-26 18:34:32.772111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.504 [2024-11-26 18:34:32.772117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.504 [2024-11-26 18:34:32.772187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.504 [2024-11-26 18:34:32.772202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:39.504 [2024-11-26 18:34:32.772210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.504 [2024-11-26 18:34:32.772217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.504 [2024-11-26 18:34:32.772231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.504 [2024-11-26 18:34:32.772238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:39.504 [2024-11-26 18:34:32.772245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.504 [2024-11-26 18:34:32.772252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.763 [2024-11-26 18:34:32.890795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.763 [2024-11-26 18:34:32.890859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:39.763 [2024-11-26 18:34:32.890872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.763 [2024-11-26 18:34:32.890880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.763 [2024-11-26 18:34:32.988706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.763 [2024-11-26 18:34:32.988763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:39.763 [2024-11-26 18:34:32.988776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.763 [2024-11-26 18:34:32.988784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.763 [2024-11-26 18:34:32.988912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.763 [2024-11-26 18:34:32.988924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 
00:30:39.763 [2024-11-26 18:34:32.988933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.763 [2024-11-26 18:34:32.988943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.763 [2024-11-26 18:34:32.988974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.763 [2024-11-26 18:34:32.988983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:39.763 [2024-11-26 18:34:32.988991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.763 [2024-11-26 18:34:32.988998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.763 [2024-11-26 18:34:32.989102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.763 [2024-11-26 18:34:32.989113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:39.763 [2024-11-26 18:34:32.989121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.763 [2024-11-26 18:34:32.989132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.763 [2024-11-26 18:34:32.989164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.763 [2024-11-26 18:34:32.989174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:39.763 [2024-11-26 18:34:32.989182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.763 [2024-11-26 18:34:32.989190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.763 [2024-11-26 18:34:32.989225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.763 [2024-11-26 18:34:32.989233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:39.763 [2024-11-26 18:34:32.989241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.763 [2024-11-26 18:34:32.989248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.763 [2024-11-26 18:34:32.989291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.763 [2024-11-26 18:34:32.989300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:39.763 [2024-11-26 18:34:32.989308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.763 [2024-11-26 18:34:32.989315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.763 [2024-11-26 18:34:32.989426] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 633.899 ms, result 0 00:30:41.671 00:30:41.671 00:30:41.671 18:34:34 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:30:41.671 [2024-11-26 18:34:34.772569] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:30:41.671 [2024-11-26 18:34:34.772709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81421 ] 00:30:41.671 [2024-11-26 18:34:34.952348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.931 [2024-11-26 18:34:35.063168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.189 [2024-11-26 18:34:35.414553] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:42.189 [2024-11-26 18:34:35.414638] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:42.449 [2024-11-26 18:34:35.569866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.449 [2024-11-26 18:34:35.570005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:42.449 [2024-11-26 18:34:35.570021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:42.449 [2024-11-26 18:34:35.570030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.449 [2024-11-26 18:34:35.570079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.449 [2024-11-26 18:34:35.570090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:42.449 [2024-11-26 18:34:35.570099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:42.449 [2024-11-26 18:34:35.570106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.449 [2024-11-26 18:34:35.570125] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:42.449 [2024-11-26 18:34:35.571106] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:42.449 [2024-11-26 18:34:35.571137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.449 [2024-11-26 18:34:35.571146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:42.449 [2024-11-26 18:34:35.571154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:30:42.449 [2024-11-26 18:34:35.571163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.449 [2024-11-26 18:34:35.572533] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:42.449 [2024-11-26 18:34:35.591456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.449 [2024-11-26 18:34:35.591495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:42.449 [2024-11-26 18:34:35.591506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.961 ms 00:30:42.450 [2024-11-26 18:34:35.591514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.450 [2024-11-26 18:34:35.591574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.450 [2024-11-26 18:34:35.591584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:42.450 [2024-11-26 18:34:35.591592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:30:42.450 [2024-11-26 18:34:35.591600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.450 [2024-11-26 18:34:35.598218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:42.450 [2024-11-26 18:34:35.598247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:42.450 [2024-11-26 18:34:35.598256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.549 ms 00:30:42.450 [2024-11-26 18:34:35.598266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.450 [2024-11-26 18:34:35.598334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.450 [2024-11-26 18:34:35.598346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:42.450 [2024-11-26 18:34:35.598355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:42.450 [2024-11-26 18:34:35.598362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.450 [2024-11-26 18:34:35.598402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.450 [2024-11-26 18:34:35.598412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:42.450 [2024-11-26 18:34:35.598421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:42.450 [2024-11-26 18:34:35.598428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.450 [2024-11-26 18:34:35.598453] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:42.450 [2024-11-26 18:34:35.603291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.450 [2024-11-26 18:34:35.603343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:42.450 [2024-11-26 18:34:35.603358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.854 ms 00:30:42.450 [2024-11-26 18:34:35.603366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.450 [2024-11-26 18:34:35.603394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.450 [2024-11-26 18:34:35.603403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:42.450 [2024-11-26 18:34:35.603411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:42.450 [2024-11-26 18:34:35.603420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.450 [2024-11-26 18:34:35.603462] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:42.450 [2024-11-26 18:34:35.603481] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:42.450 [2024-11-26 18:34:35.603513] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:42.450 [2024-11-26 18:34:35.603531] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:42.450 [2024-11-26 18:34:35.603635] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:42.450 [2024-11-26 18:34:35.603647] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:42.450 [2024-11-26 18:34:35.603658] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:42.450 [2024-11-26 18:34:35.603668] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:42.450 [2024-11-26 18:34:35.603676] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:42.450 [2024-11-26 18:34:35.603684] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:42.450 [2024-11-26 18:34:35.603694] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:42.450 [2024-11-26 18:34:35.603705] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:42.450 [2024-11-26 18:34:35.603714] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:42.450 [2024-11-26 18:34:35.603722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.450 [2024-11-26 18:34:35.603729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:42.450 [2024-11-26 18:34:35.603738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:30:42.450 [2024-11-26 18:34:35.603746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.450 [2024-11-26 18:34:35.603812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.450 [2024-11-26 18:34:35.603821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:42.450 [2024-11-26 18:34:35.603828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:42.450 [2024-11-26 18:34:35.603836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.450 [2024-11-26 18:34:35.603926] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:42.450 [2024-11-26 18:34:35.603940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:42.450 [2024-11-26 18:34:35.603949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:42.450 [2024-11-26 18:34:35.603956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:42.450 [2024-11-26 18:34:35.603964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:42.450 [2024-11-26 18:34:35.603971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:42.450 [2024-11-26 18:34:35.603979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:42.450 [2024-11-26 18:34:35.603986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:42.450 [2024-11-26 18:34:35.603994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:42.450 [2024-11-26 18:34:35.604002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:42.450 [2024-11-26 18:34:35.604009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:42.450 [2024-11-26 18:34:35.604016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:42.450 [2024-11-26 18:34:35.604023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:42.450 [2024-11-26 18:34:35.604038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:42.450 [2024-11-26 18:34:35.604046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:42.450 [2024-11-26 18:34:35.604053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:42.450 [2024-11-26 18:34:35.604061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:42.450 [2024-11-26 18:34:35.604067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:42.450 [2024-11-26 18:34:35.604074] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:42.450 [2024-11-26 18:34:35.604080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:42.450 [2024-11-26 18:34:35.604087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:42.450 [2024-11-26 18:34:35.604094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:42.450 [2024-11-26 18:34:35.604101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:42.450 [2024-11-26 18:34:35.604108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:42.450 [2024-11-26 18:34:35.604115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:42.450 [2024-11-26 18:34:35.604121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:42.450 [2024-11-26 18:34:35.604127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:42.450 [2024-11-26 18:34:35.604134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:42.450 [2024-11-26 18:34:35.604140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:42.450 [2024-11-26 18:34:35.604147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:42.450 [2024-11-26 18:34:35.604153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:42.450 [2024-11-26 18:34:35.604160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:42.450 [2024-11-26 18:34:35.604167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:42.450 [2024-11-26 18:34:35.604174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:42.450 [2024-11-26 18:34:35.604181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:42.450 [2024-11-26 18:34:35.604187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:42.450 [2024-11-26 18:34:35.604193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:42.450 [2024-11-26 18:34:35.604199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:42.450 [2024-11-26 18:34:35.604205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:42.450 [2024-11-26 18:34:35.604211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:42.450 [2024-11-26 18:34:35.604218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:42.450 [2024-11-26 18:34:35.604224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:42.450 [2024-11-26 18:34:35.604230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:42.450 [2024-11-26 18:34:35.604236] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:42.450 [2024-11-26 18:34:35.604243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:42.450 [2024-11-26 18:34:35.604249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:42.450 [2024-11-26 18:34:35.604257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:42.450 [2024-11-26 18:34:35.604264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:42.450 [2024-11-26 18:34:35.604270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:42.450 [2024-11-26 18:34:35.604276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:42.450 
[2024-11-26 18:34:35.604283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:42.450 [2024-11-26 18:34:35.604289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:42.450 [2024-11-26 18:34:35.604296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:42.450 [2024-11-26 18:34:35.604303] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:42.451 [2024-11-26 18:34:35.604311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:42.451 [2024-11-26 18:34:35.604322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:42.451 [2024-11-26 18:34:35.604329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:42.451 [2024-11-26 18:34:35.604336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:42.451 [2024-11-26 18:34:35.604342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:42.451 [2024-11-26 18:34:35.604349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:42.451 [2024-11-26 18:34:35.604358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:42.451 [2024-11-26 18:34:35.604365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:42.451 [2024-11-26 18:34:35.604372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:42.451 [2024-11-26 18:34:35.604379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:42.451 [2024-11-26 18:34:35.604386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:42.451 [2024-11-26 18:34:35.604394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:42.451 [2024-11-26 18:34:35.604400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:42.451 [2024-11-26 18:34:35.604408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:42.451 [2024-11-26 18:34:35.604415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:42.451 [2024-11-26 18:34:35.604421] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:42.451 [2024-11-26 18:34:35.604429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:42.451 [2024-11-26 18:34:35.604436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:42.451 [2024-11-26 18:34:35.604443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:42.451 [2024-11-26 18:34:35.604450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:42.451 [2024-11-26 18:34:35.604458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:42.451 [2024-11-26 18:34:35.604466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.451 [2024-11-26 18:34:35.604473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:42.451 [2024-11-26 18:34:35.604481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:30:42.451 [2024-11-26 18:34:35.604488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.451 [2024-11-26 18:34:35.643190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.451 [2024-11-26 18:34:35.643234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:42.451 [2024-11-26 18:34:35.643246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.730 ms 00:30:42.451 [2024-11-26 18:34:35.643258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.451 [2024-11-26 18:34:35.643344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.451 [2024-11-26 18:34:35.643354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:42.451 [2024-11-26 18:34:35.643363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:42.451 [2024-11-26 18:34:35.643370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.451 [2024-11-26 18:34:35.699384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.451 [2024-11-26 18:34:35.699423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:42.451 [2024-11-26 18:34:35.699435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.055 ms 00:30:42.451 [2024-11-26 18:34:35.699443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.451 [2024-11-26 18:34:35.699486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.451 [2024-11-26 18:34:35.699495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:42.451 [2024-11-26 18:34:35.699506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:42.451 [2024-11-26 18:34:35.699513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.451 [2024-11-26 18:34:35.700008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.451 [2024-11-26 18:34:35.700028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:42.451 [2024-11-26 18:34:35.700038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:30:42.451 [2024-11-26 18:34:35.700046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.451 [2024-11-26 18:34:35.700156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.451 [2024-11-26 18:34:35.700170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:42.451 [2024-11-26 18:34:35.700184] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:30:42.451 [2024-11-26 18:34:35.700191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.451 [2024-11-26 18:34:35.718392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.451 [2024-11-26 18:34:35.718516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:42.451 [2024-11-26 18:34:35.718531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.215 ms 00:30:42.451 [2024-11-26 18:34:35.718539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.451 [2024-11-26 18:34:35.737221] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:42.451 [2024-11-26 18:34:35.737302] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:42.451 [2024-11-26 18:34:35.737316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.451 [2024-11-26 18:34:35.737325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:42.451 [2024-11-26 18:34:35.737334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.692 ms 00:30:42.451 [2024-11-26 18:34:35.737341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.451 [2024-11-26 18:34:35.765526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.451 [2024-11-26 18:34:35.765564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:42.451 [2024-11-26 18:34:35.765575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.201 ms 00:30:42.451 [2024-11-26 18:34:35.765583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.711 [2024-11-26 18:34:35.783017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.711 [2024-11-26 18:34:35.783102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:42.711 [2024-11-26 18:34:35.783115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.420 ms 00:30:42.711 [2024-11-26 18:34:35.783123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.711 [2024-11-26 18:34:35.800408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.711 [2024-11-26 18:34:35.800441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:42.711 [2024-11-26 18:34:35.800451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.284 ms 00:30:42.711 [2024-11-26 18:34:35.800458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.711 [2024-11-26 18:34:35.801185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.711 [2024-11-26 18:34:35.801217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:42.711 [2024-11-26 18:34:35.801230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:30:42.711 [2024-11-26 18:34:35.801238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.711 [2024-11-26 18:34:35.884009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.711 [2024-11-26 18:34:35.884070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:42.711 [2024-11-26 18:34:35.884090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.911 ms 00:30:42.711 [2024-11-26 18:34:35.884098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.711 [2024-11-26 18:34:35.896955] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:42.711 [2024-11-26 18:34:35.900234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.711 [2024-11-26 18:34:35.900267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:42.711 [2024-11-26 18:34:35.900281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.109 ms 00:30:42.711 [2024-11-26 18:34:35.900291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.711 [2024-11-26 18:34:35.900383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.711 [2024-11-26 18:34:35.900397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:42.711 [2024-11-26 18:34:35.900410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:42.711 [2024-11-26 18:34:35.900419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.711 [2024-11-26 18:34:35.902122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.711 [2024-11-26 18:34:35.902161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:42.711 [2024-11-26 18:34:35.902173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.647 ms 00:30:42.711 [2024-11-26 18:34:35.902183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.711 [2024-11-26 18:34:35.902218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.711 [2024-11-26 18:34:35.902230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:42.711 [2024-11-26 18:34:35.902239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:42.711 [2024-11-26 18:34:35.902249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.711 [2024-11-26 18:34:35.902290] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:42.711 [2024-11-26 18:34:35.902302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.711 [2024-11-26 18:34:35.902311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:42.711 [2024-11-26 18:34:35.902321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:42.711 [2024-11-26 18:34:35.902330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.711 [2024-11-26 18:34:35.938958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.711 [2024-11-26 18:34:35.938994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:42.711 [2024-11-26 18:34:35.939011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.679 ms 00:30:42.711 [2024-11-26 18:34:35.939018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.711 [2024-11-26 18:34:35.939086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.711 [2024-11-26 18:34:35.939096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:42.712 [2024-11-26 18:34:35.939104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:42.712 [2024-11-26 18:34:35.939111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
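Every management step in the startup trace above is logged by mngt/ftl_mngt.c as a fixed quadruple — Action (or Rollback), name, duration, status — so the per-step durations can be tabulated mechanically. The sketch below is a convenience only, not part of the test: it assumes the raw console log keeps one entry per line (as the unwrapped Jenkins log does), and console.log is a hypothetical capture file, not something this job writes.

  # Sketch: list each FTL management step with its duration.
  # 'console.log' is a hypothetical capture of this console output.
  grep -E 'trace_step.*(name|duration):' console.log |
  awk '/name:/     { sub(/.*name: /, "");     step = $0; next }
       /duration:/ { sub(/.*duration: /, ""); printf "%10s ms  %s\n", $1, step }'

Summed, the per-step durations listed above account for nearly all of the 370.598 ms 'FTL startup' total reported next, since the trace covers every step of the management process.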
00:30:42.712 [2024-11-26 18:34:35.940193] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 370.598 ms, result 0 00:30:44.091  [2024-11-26T18:34:38.363Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-26T18:34:39.300Z] Copying: 58/1024 [MB] (31 MBps) [2024-11-26T18:34:40.238Z] Copying: 89/1024 [MB] (31 MBps) [2024-11-26T18:34:41.176Z] Copying: 118/1024 [MB] (28 MBps) [2024-11-26T18:34:42.114Z] Copying: 145/1024 [MB] (27 MBps) [2024-11-26T18:34:43.530Z] Copying: 173/1024 [MB] (27 MBps) [2024-11-26T18:34:44.100Z] Copying: 200/1024 [MB] (27 MBps) [2024-11-26T18:34:45.477Z] Copying: 228/1024 [MB] (27 MBps) [2024-11-26T18:34:46.416Z] Copying: 256/1024 [MB] (27 MBps) [2024-11-26T18:34:47.356Z] Copying: 283/1024 [MB] (27 MBps) [2024-11-26T18:34:48.294Z] Copying: 310/1024 [MB] (27 MBps) [2024-11-26T18:34:49.231Z] Copying: 338/1024 [MB] (27 MBps) [2024-11-26T18:34:50.166Z] Copying: 366/1024 [MB] (27 MBps) [2024-11-26T18:34:51.128Z] Copying: 393/1024 [MB] (27 MBps) [2024-11-26T18:34:52.506Z] Copying: 423/1024 [MB] (29 MBps) [2024-11-26T18:34:53.440Z] Copying: 451/1024 [MB] (27 MBps) [2024-11-26T18:34:54.375Z] Copying: 479/1024 [MB] (28 MBps) [2024-11-26T18:34:55.309Z] Copying: 507/1024 [MB] (27 MBps) [2024-11-26T18:34:56.243Z] Copying: 534/1024 [MB] (27 MBps) [2024-11-26T18:34:57.177Z] Copying: 562/1024 [MB] (27 MBps) [2024-11-26T18:34:58.114Z] Copying: 590/1024 [MB] (28 MBps) [2024-11-26T18:34:59.068Z] Copying: 618/1024 [MB] (27 MBps) [2024-11-26T18:35:00.447Z] Copying: 645/1024 [MB] (27 MBps) [2024-11-26T18:35:01.381Z] Copying: 672/1024 [MB] (26 MBps) [2024-11-26T18:35:02.318Z] Copying: 699/1024 [MB] (27 MBps) [2024-11-26T18:35:03.251Z] Copying: 726/1024 [MB] (26 MBps) [2024-11-26T18:35:04.187Z] Copying: 753/1024 [MB] (27 MBps) [2024-11-26T18:35:05.125Z] Copying: 781/1024 [MB] (27 MBps) [2024-11-26T18:35:06.095Z] Copying: 808/1024 [MB] (27 MBps) [2024-11-26T18:35:07.473Z] Copying: 836/1024 [MB] (27 MBps) [2024-11-26T18:35:08.411Z] Copying: 865/1024 [MB] (28 MBps) [2024-11-26T18:35:09.348Z] Copying: 893/1024 [MB] (28 MBps) [2024-11-26T18:35:10.285Z] Copying: 922/1024 [MB] (28 MBps) [2024-11-26T18:35:11.221Z] Copying: 950/1024 [MB] (28 MBps) [2024-11-26T18:35:12.160Z] Copying: 978/1024 [MB] (27 MBps) [2024-11-26T18:35:12.729Z] Copying: 1005/1024 [MB] (27 MBps) [2024-11-26T18:35:12.989Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-26 18:35:12.851154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.654 [2024-11-26 18:35:12.851232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:19.654 [2024-11-26 18:35:12.851262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:19.654 [2024-11-26 18:35:12.851277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.654 [2024-11-26 18:35:12.851312] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:19.654 [2024-11-26 18:35:12.859576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.654 [2024-11-26 18:35:12.859643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:19.654 [2024-11-26 18:35:12.859662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.253 ms 00:31:19.654 [2024-11-26 18:35:12.859677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.654 [2024-11-26 18:35:12.860029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:19.654 [2024-11-26 18:35:12.860049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:19.654 [2024-11-26 18:35:12.860065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:31:19.654 [2024-11-26 18:35:12.860083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.654 [2024-11-26 18:35:12.865357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.654 [2024-11-26 18:35:12.865401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:19.654 [2024-11-26 18:35:12.865414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.259 ms 00:31:19.654 [2024-11-26 18:35:12.865424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.654 [2024-11-26 18:35:12.871439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.654 [2024-11-26 18:35:12.871472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:19.654 [2024-11-26 18:35:12.871481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.991 ms 00:31:19.654 [2024-11-26 18:35:12.871493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.654 [2024-11-26 18:35:12.907845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.654 [2024-11-26 18:35:12.907880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:19.654 [2024-11-26 18:35:12.907891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.378 ms 00:31:19.654 [2024-11-26 18:35:12.907898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.654 [2024-11-26 18:35:12.927918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.654 [2024-11-26 18:35:12.927953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:19.654 [2024-11-26 18:35:12.927964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.024 ms 00:31:19.654 [2024-11-26 18:35:12.927971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.914 [2024-11-26 18:35:13.051242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.914 [2024-11-26 18:35:13.051322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:19.914 [2024-11-26 18:35:13.051344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 123.469 ms 00:31:19.914 [2024-11-26 18:35:13.051352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.914 [2024-11-26 18:35:13.087875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.914 [2024-11-26 18:35:13.087925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:19.914 [2024-11-26 18:35:13.087938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.571 ms 00:31:19.914 [2024-11-26 18:35:13.087945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.914 [2024-11-26 18:35:13.122217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.914 [2024-11-26 18:35:13.122257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:19.914 [2024-11-26 18:35:13.122268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.301 ms 00:31:19.914 [2024-11-26 18:35:13.122275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.914 [2024-11-26 
18:35:13.155967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.914 [2024-11-26 18:35:13.156059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:19.914 [2024-11-26 18:35:13.156073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.724 ms 00:31:19.914 [2024-11-26 18:35:13.156080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.914 [2024-11-26 18:35:13.189057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.914 [2024-11-26 18:35:13.189090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:19.914 [2024-11-26 18:35:13.189101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.957 ms 00:31:19.914 [2024-11-26 18:35:13.189108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.914 [2024-11-26 18:35:13.189140] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:19.914 [2024-11-26 18:35:13.189154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:31:19.914 [2024-11-26 18:35:13.189163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:19.914 [2024-11-26 18:35:13.189170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:19.914 [2024-11-26 18:35:13.189178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:19.914 [2024-11-26 18:35:13.189186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 
state: free 00:31:19.915 [2024-11-26 18:35:13.189288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 
0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189857] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:19.915 [2024-11-26 18:35:13.189864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:19.916 [2024-11-26 18:35:13.189872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:19.916 [2024-11-26 18:35:13.189879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:19.916 [2024-11-26 18:35:13.189895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:19.916 [2024-11-26 18:35:13.189902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:19.916 [2024-11-26 18:35:13.189909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:19.916 [2024-11-26 18:35:13.189916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:19.916 [2024-11-26 18:35:13.189930] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:19.916 [2024-11-26 18:35:13.189937] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ed23b79-c687-4115-982f-0da7fc18820e 00:31:19.916 [2024-11-26 18:35:13.189945] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:31:19.916 [2024-11-26 18:35:13.189952] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 22720 00:31:19.916 [2024-11-26 18:35:13.189958] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 21760 00:31:19.916 [2024-11-26 18:35:13.189966] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0441 00:31:19.916 [2024-11-26 18:35:13.189977] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:19.916 [2024-11-26 18:35:13.189996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:19.916 [2024-11-26 18:35:13.190003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:19.916 [2024-11-26 18:35:13.190010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:19.916 [2024-11-26 18:35:13.190016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:19.916 [2024-11-26 18:35:13.190023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.916 [2024-11-26 18:35:13.190030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:19.916 [2024-11-26 18:35:13.190037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:31:19.916 [2024-11-26 18:35:13.190045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.916 [2024-11-26 18:35:13.208556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.916 [2024-11-26 18:35:13.208587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:19.916 [2024-11-26 18:35:13.208601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.518 ms 00:31:19.916 [2024-11-26 18:35:13.208609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.916 [2024-11-26 18:35:13.209147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.916 [2024-11-26 18:35:13.209162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:19.916 [2024-11-26 18:35:13.209170] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:31:19.916 [2024-11-26 18:35:13.209178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.175 [2024-11-26 18:35:13.258440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.175 [2024-11-26 18:35:13.258483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:20.175 [2024-11-26 18:35:13.258494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.175 [2024-11-26 18:35:13.258503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.175 [2024-11-26 18:35:13.258559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.175 [2024-11-26 18:35:13.258568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:20.175 [2024-11-26 18:35:13.258575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.175 [2024-11-26 18:35:13.258582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.175 [2024-11-26 18:35:13.258671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.175 [2024-11-26 18:35:13.258683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:20.175 [2024-11-26 18:35:13.258695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.175 [2024-11-26 18:35:13.258703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.175 [2024-11-26 18:35:13.258744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.175 [2024-11-26 18:35:13.258753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:20.175 [2024-11-26 18:35:13.258761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.175 [2024-11-26 18:35:13.258768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.175 [2024-11-26 18:35:13.381055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.175 [2024-11-26 18:35:13.381120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:20.175 [2024-11-26 18:35:13.381132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.175 [2024-11-26 18:35:13.381140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.175 [2024-11-26 18:35:13.478713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.175 [2024-11-26 18:35:13.478763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:20.175 [2024-11-26 18:35:13.478774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.175 [2024-11-26 18:35:13.478782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.175 [2024-11-26 18:35:13.478862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.175 [2024-11-26 18:35:13.478871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:20.175 [2024-11-26 18:35:13.478878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.175 [2024-11-26 18:35:13.478890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.175 [2024-11-26 18:35:13.478919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.175 [2024-11-26 18:35:13.478928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands 00:31:20.175 [2024-11-26 18:35:13.478935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.175 [2024-11-26 18:35:13.478942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.175 [2024-11-26 18:35:13.479034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.175 [2024-11-26 18:35:13.479045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:20.175 [2024-11-26 18:35:13.479053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.175 [2024-11-26 18:35:13.479060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.175 [2024-11-26 18:35:13.479095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.175 [2024-11-26 18:35:13.479105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:20.175 [2024-11-26 18:35:13.479112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.175 [2024-11-26 18:35:13.479119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.175 [2024-11-26 18:35:13.479153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.175 [2024-11-26 18:35:13.479162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:20.175 [2024-11-26 18:35:13.479169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.175 [2024-11-26 18:35:13.479177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.175 [2024-11-26 18:35:13.479218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.175 [2024-11-26 18:35:13.479227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:20.175 [2024-11-26 18:35:13.479234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.175 [2024-11-26 18:35:13.479241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.175 [2024-11-26 18:35:13.479379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 629.401 ms, result 0 00:31:21.554 00:31:21.554 00:31:21.554 18:35:14 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:22.932 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:22.932 18:35:16 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:22.932 18:35:16 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:31:22.932 18:35:16 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:23.192 18:35:16 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:23.192 18:35:16 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:23.192 Process with pid 79872 is not found 00:31:23.192 18:35:16 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79872 00:31:23.192 18:35:16 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79872 ']' 00:31:23.192 18:35:16 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79872 00:31:23.192 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79872) - No such process 00:31:23.192 18:35:16 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79872 is not found' 00:31:23.192 
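That md5sum -c is the actual pass/fail of the restore test: a checksum of the data written earlier in the run is recorded, the device goes through the dirty shutdown and restore cycle traced above, the same region is read back with spdk_dd, and a single 'testfile: OK' proves the FTL device returned every byte. The statistics dump is self-consistent too: 22720 total writes against 21760 user writes gives 22720 / 21760 ≈ 1.0441, exactly the WAF the log reports. In outline (a sketch; the helpers restore.sh actually uses differ):

  # Sketch of the verification pattern, not restore.sh's literal code.
  md5sum testfile > testfile.md5   # before: record the checksum of the written data
  # ... dirty shutdown, FTL restore, and the spdk_dd read-back happen here ...
  md5sum -c testfile.md5           # after: 'testfile: OK' means nothing was lost

The harmless 'No such process' from killprocess is expected: kill -0 sends no signal and only probes whether the PID still exists, and by this point the SPDK app with pid 79872 has already exited, so the cleanup that follows simply removes the leftover files.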
18:35:16 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:31:23.192 Remove shared memory files 00:31:23.192 18:35:16 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:23.192 18:35:16 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:31:23.192 18:35:16 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:31:23.192 18:35:16 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:31:23.192 18:35:16 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:23.192 18:35:16 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:31:23.192 ************************************ 00:31:23.192 END TEST ftl_restore 00:31:23.192 ************************************ 00:31:23.192 00:31:23.192 real 3m9.834s 00:31:23.192 user 2m57.634s 00:31:23.192 sys 0m12.603s 00:31:23.192 18:35:16 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:23.192 18:35:16 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:31:23.192 18:35:16 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:31:23.192 18:35:16 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:23.192 18:35:16 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:23.192 18:35:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:23.192 ************************************ 00:31:23.192 START TEST ftl_dirty_shutdown 00:31:23.192 ************************************ 00:31:23.192 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:31:23.192 * Looking for test storage... 00:31:23.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:23.192 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:23.192 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:31:23.192 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:23.452 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:23.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.452 --rc genhtml_branch_coverage=1 00:31:23.452 --rc genhtml_function_coverage=1 00:31:23.452 --rc genhtml_legend=1 00:31:23.452 --rc geninfo_all_blocks=1 00:31:23.452 --rc geninfo_unexecuted_blocks=1 00:31:23.452 00:31:23.453 ' 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:23.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.453 --rc genhtml_branch_coverage=1 00:31:23.453 --rc genhtml_function_coverage=1 00:31:23.453 --rc genhtml_legend=1 00:31:23.453 --rc geninfo_all_blocks=1 00:31:23.453 --rc geninfo_unexecuted_blocks=1 00:31:23.453 00:31:23.453 ' 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:23.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.453 --rc genhtml_branch_coverage=1 00:31:23.453 --rc genhtml_function_coverage=1 00:31:23.453 --rc genhtml_legend=1 00:31:23.453 --rc geninfo_all_blocks=1 00:31:23.453 --rc geninfo_unexecuted_blocks=1 00:31:23.453 00:31:23.453 ' 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:23.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.453 --rc genhtml_branch_coverage=1 00:31:23.453 --rc genhtml_function_coverage=1 00:31:23.453 --rc genhtml_legend=1 00:31:23.453 --rc geninfo_all_blocks=1 00:31:23.453 --rc geninfo_unexecuted_blocks=1 00:31:23.453 00:31:23.453 ' 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:31:23.453 18:35:16 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81899 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81899 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81899 ']' 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:23.453 18:35:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:23.453 [2024-11-26 18:35:16.752930] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
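At this point the harness has installed its cleanup trap, launched spdk_tgt pinned to core 0 (-m 0x1), recorded its pid in svcpid, and is blocking in waitforlisten until the RPC socket answers. A rough sketch of that bring-up, with waitforlisten approximated here by polling spdk_get_version (the real helper in autotest_common.sh does more bookkeeping, such as retry limits and pid checks):

    # start the target in the background and remember its pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
    # poll the default RPC socket until the app is ready to serve RPCs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
            sleep 0.5
    done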
00:31:23.453 [2024-11-26 18:35:16.753138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81899 ] 00:31:23.713 [2024-11-26 18:35:16.924244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.713 [2024-11-26 18:35:17.033049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.653 18:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:24.653 18:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:24.653 18:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:24.653 18:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:31:24.653 18:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:24.653 18:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:31:24.653 18:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:24.653 18:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:24.912 18:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:24.912 18:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:24.912 18:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:24.912 18:35:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:24.912 18:35:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:24.912 18:35:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:24.912 18:35:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:24.912 18:35:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:25.171 18:35:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:25.171 { 00:31:25.171 "name": "nvme0n1", 00:31:25.171 "aliases": [ 00:31:25.171 "f5df2778-894e-42ad-bceb-5ed002a75279" 00:31:25.171 ], 00:31:25.171 "product_name": "NVMe disk", 00:31:25.171 "block_size": 4096, 00:31:25.171 "num_blocks": 1310720, 00:31:25.171 "uuid": "f5df2778-894e-42ad-bceb-5ed002a75279", 00:31:25.171 "numa_id": -1, 00:31:25.171 "assigned_rate_limits": { 00:31:25.171 "rw_ios_per_sec": 0, 00:31:25.171 "rw_mbytes_per_sec": 0, 00:31:25.171 "r_mbytes_per_sec": 0, 00:31:25.171 "w_mbytes_per_sec": 0 00:31:25.171 }, 00:31:25.171 "claimed": true, 00:31:25.171 "claim_type": "read_many_write_one", 00:31:25.172 "zoned": false, 00:31:25.172 "supported_io_types": { 00:31:25.172 "read": true, 00:31:25.172 "write": true, 00:31:25.172 "unmap": true, 00:31:25.172 "flush": true, 00:31:25.172 "reset": true, 00:31:25.172 "nvme_admin": true, 00:31:25.172 "nvme_io": true, 00:31:25.172 "nvme_io_md": false, 00:31:25.172 "write_zeroes": true, 00:31:25.172 "zcopy": false, 00:31:25.172 "get_zone_info": false, 00:31:25.172 "zone_management": false, 00:31:25.172 "zone_append": false, 00:31:25.172 "compare": true, 00:31:25.172 "compare_and_write": false, 00:31:25.172 "abort": true, 00:31:25.172 "seek_hole": false, 00:31:25.172 "seek_data": false, 00:31:25.172 
"copy": true, 00:31:25.172 "nvme_iov_md": false 00:31:25.172 }, 00:31:25.172 "driver_specific": { 00:31:25.172 "nvme": [ 00:31:25.172 { 00:31:25.172 "pci_address": "0000:00:11.0", 00:31:25.172 "trid": { 00:31:25.172 "trtype": "PCIe", 00:31:25.172 "traddr": "0000:00:11.0" 00:31:25.172 }, 00:31:25.172 "ctrlr_data": { 00:31:25.172 "cntlid": 0, 00:31:25.172 "vendor_id": "0x1b36", 00:31:25.172 "model_number": "QEMU NVMe Ctrl", 00:31:25.172 "serial_number": "12341", 00:31:25.172 "firmware_revision": "8.0.0", 00:31:25.172 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:25.172 "oacs": { 00:31:25.172 "security": 0, 00:31:25.172 "format": 1, 00:31:25.172 "firmware": 0, 00:31:25.172 "ns_manage": 1 00:31:25.172 }, 00:31:25.172 "multi_ctrlr": false, 00:31:25.172 "ana_reporting": false 00:31:25.172 }, 00:31:25.172 "vs": { 00:31:25.172 "nvme_version": "1.4" 00:31:25.172 }, 00:31:25.172 "ns_data": { 00:31:25.172 "id": 1, 00:31:25.172 "can_share": false 00:31:25.172 } 00:31:25.172 } 00:31:25.172 ], 00:31:25.172 "mp_policy": "active_passive" 00:31:25.172 } 00:31:25.172 } 00:31:25.172 ]' 00:31:25.172 18:35:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:25.172 18:35:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:25.172 18:35:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:25.172 18:35:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:25.172 18:35:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:25.172 18:35:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:31:25.172 18:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:25.172 18:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:25.172 18:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:25.172 18:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:25.172 18:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:25.431 18:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=b2752b9c-73b9-4919-b206-df22f8045165 00:31:25.431 18:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:25.431 18:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2752b9c-73b9-4919-b206-df22f8045165 00:31:25.691 18:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:25.951 18:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a388e963-ea62-4015-81d6-c88329105289 00:31:25.951 18:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a388e963-ea62-4015-81d6-c88329105289 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=3b1561fb-f682-4823-a88d-e06f6193ead9 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3b1561fb-f682-4823-a88d-e06f6193ead9 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=3b1561fb-f682-4823-a88d-e06f6193ead9 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3b1561fb-f682-4823-a88d-e06f6193ead9 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3b1561fb-f682-4823-a88d-e06f6193ead9 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3b1561fb-f682-4823-a88d-e06f6193ead9 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:26.212 { 00:31:26.212 "name": "3b1561fb-f682-4823-a88d-e06f6193ead9", 00:31:26.212 "aliases": [ 00:31:26.212 "lvs/nvme0n1p0" 00:31:26.212 ], 00:31:26.212 "product_name": "Logical Volume", 00:31:26.212 "block_size": 4096, 00:31:26.212 "num_blocks": 26476544, 00:31:26.212 "uuid": "3b1561fb-f682-4823-a88d-e06f6193ead9", 00:31:26.212 "assigned_rate_limits": { 00:31:26.212 "rw_ios_per_sec": 0, 00:31:26.212 "rw_mbytes_per_sec": 0, 00:31:26.212 "r_mbytes_per_sec": 0, 00:31:26.212 "w_mbytes_per_sec": 0 00:31:26.212 }, 00:31:26.212 "claimed": false, 00:31:26.212 "zoned": false, 00:31:26.212 "supported_io_types": { 00:31:26.212 "read": true, 00:31:26.212 "write": true, 00:31:26.212 "unmap": true, 00:31:26.212 "flush": false, 00:31:26.212 "reset": true, 00:31:26.212 "nvme_admin": false, 00:31:26.212 "nvme_io": false, 00:31:26.212 "nvme_io_md": false, 00:31:26.212 "write_zeroes": true, 00:31:26.212 "zcopy": false, 00:31:26.212 "get_zone_info": false, 00:31:26.212 "zone_management": false, 00:31:26.212 "zone_append": false, 00:31:26.212 "compare": false, 00:31:26.212 "compare_and_write": false, 00:31:26.212 "abort": false, 00:31:26.212 "seek_hole": true, 00:31:26.212 "seek_data": true, 00:31:26.212 "copy": false, 00:31:26.212 "nvme_iov_md": false 00:31:26.212 }, 00:31:26.212 "driver_specific": { 00:31:26.212 "lvol": { 00:31:26.212 "lvol_store_uuid": "a388e963-ea62-4015-81d6-c88329105289", 00:31:26.212 "base_bdev": "nvme0n1", 00:31:26.212 "thin_provision": true, 00:31:26.212 "num_allocated_clusters": 0, 00:31:26.212 "snapshot": false, 00:31:26.212 "clone": false, 00:31:26.212 "esnap_clone": false 00:31:26.212 } 00:31:26.212 } 00:31:26.212 } 00:31:26.212 ]' 00:31:26.212 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:26.472 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:26.472 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:26.472 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:26.472 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:26.472 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:26.472 18:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:31:26.472 18:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:26.472 18:35:19 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:26.731 18:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:26.731 18:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:26.731 18:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 3b1561fb-f682-4823-a88d-e06f6193ead9 00:31:26.731 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3b1561fb-f682-4823-a88d-e06f6193ead9 00:31:26.731 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:26.731 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:26.731 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:26.731 18:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3b1561fb-f682-4823-a88d-e06f6193ead9 00:31:26.990 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:26.990 { 00:31:26.990 "name": "3b1561fb-f682-4823-a88d-e06f6193ead9", 00:31:26.990 "aliases": [ 00:31:26.990 "lvs/nvme0n1p0" 00:31:26.990 ], 00:31:26.990 "product_name": "Logical Volume", 00:31:26.990 "block_size": 4096, 00:31:26.990 "num_blocks": 26476544, 00:31:26.990 "uuid": "3b1561fb-f682-4823-a88d-e06f6193ead9", 00:31:26.990 "assigned_rate_limits": { 00:31:26.990 "rw_ios_per_sec": 0, 00:31:26.990 "rw_mbytes_per_sec": 0, 00:31:26.990 "r_mbytes_per_sec": 0, 00:31:26.990 "w_mbytes_per_sec": 0 00:31:26.990 }, 00:31:26.990 "claimed": false, 00:31:26.990 "zoned": false, 00:31:26.990 "supported_io_types": { 00:31:26.990 "read": true, 00:31:26.990 "write": true, 00:31:26.990 "unmap": true, 00:31:26.990 "flush": false, 00:31:26.990 "reset": true, 00:31:26.990 "nvme_admin": false, 00:31:26.990 "nvme_io": false, 00:31:26.990 "nvme_io_md": false, 00:31:26.990 "write_zeroes": true, 00:31:26.990 "zcopy": false, 00:31:26.990 "get_zone_info": false, 00:31:26.990 "zone_management": false, 00:31:26.990 "zone_append": false, 00:31:26.990 "compare": false, 00:31:26.990 "compare_and_write": false, 00:31:26.990 "abort": false, 00:31:26.990 "seek_hole": true, 00:31:26.990 "seek_data": true, 00:31:26.990 "copy": false, 00:31:26.990 "nvme_iov_md": false 00:31:26.990 }, 00:31:26.990 "driver_specific": { 00:31:26.990 "lvol": { 00:31:26.990 "lvol_store_uuid": "a388e963-ea62-4015-81d6-c88329105289", 00:31:26.990 "base_bdev": "nvme0n1", 00:31:26.990 "thin_provision": true, 00:31:26.990 "num_allocated_clusters": 0, 00:31:26.990 "snapshot": false, 00:31:26.990 "clone": false, 00:31:26.990 "esnap_clone": false 00:31:26.990 } 00:31:26.990 } 00:31:26.990 } 00:31:26.990 ]' 00:31:26.990 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:26.990 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:26.990 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:26.990 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:26.990 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:26.990 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:26.990 18:35:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:31:26.990 18:35:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:27.260 18:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:31:27.260 18:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 3b1561fb-f682-4823-a88d-e06f6193ead9 00:31:27.260 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3b1561fb-f682-4823-a88d-e06f6193ead9 00:31:27.260 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:27.260 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:31:27.260 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:31:27.260 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3b1561fb-f682-4823-a88d-e06f6193ead9 00:31:27.534 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:27.534 { 00:31:27.534 "name": "3b1561fb-f682-4823-a88d-e06f6193ead9", 00:31:27.534 "aliases": [ 00:31:27.534 "lvs/nvme0n1p0" 00:31:27.534 ], 00:31:27.534 "product_name": "Logical Volume", 00:31:27.534 "block_size": 4096, 00:31:27.534 "num_blocks": 26476544, 00:31:27.534 "uuid": "3b1561fb-f682-4823-a88d-e06f6193ead9", 00:31:27.534 "assigned_rate_limits": { 00:31:27.534 "rw_ios_per_sec": 0, 00:31:27.534 "rw_mbytes_per_sec": 0, 00:31:27.534 "r_mbytes_per_sec": 0, 00:31:27.534 "w_mbytes_per_sec": 0 00:31:27.534 }, 00:31:27.534 "claimed": false, 00:31:27.534 "zoned": false, 00:31:27.534 "supported_io_types": { 00:31:27.534 "read": true, 00:31:27.534 "write": true, 00:31:27.534 "unmap": true, 00:31:27.534 "flush": false, 00:31:27.534 "reset": true, 00:31:27.534 "nvme_admin": false, 00:31:27.534 "nvme_io": false, 00:31:27.534 "nvme_io_md": false, 00:31:27.534 "write_zeroes": true, 00:31:27.534 "zcopy": false, 00:31:27.534 "get_zone_info": false, 00:31:27.534 "zone_management": false, 00:31:27.534 "zone_append": false, 00:31:27.534 "compare": false, 00:31:27.534 "compare_and_write": false, 00:31:27.534 "abort": false, 00:31:27.534 "seek_hole": true, 00:31:27.534 "seek_data": true, 00:31:27.534 "copy": false, 00:31:27.534 "nvme_iov_md": false 00:31:27.534 }, 00:31:27.534 "driver_specific": { 00:31:27.534 "lvol": { 00:31:27.534 "lvol_store_uuid": "a388e963-ea62-4015-81d6-c88329105289", 00:31:27.534 "base_bdev": "nvme0n1", 00:31:27.534 "thin_provision": true, 00:31:27.534 "num_allocated_clusters": 0, 00:31:27.534 "snapshot": false, 00:31:27.535 "clone": false, 00:31:27.535 "esnap_clone": false 00:31:27.535 } 00:31:27.535 } 00:31:27.535 } 00:31:27.535 ]' 00:31:27.535 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:27.535 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:31:27.535 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:27.535 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:27.535 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:27.535 18:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:31:27.535 18:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:31:27.535 18:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3b1561fb-f682-4823-a88d-e06f6193ead9 
--l2p_dram_limit 10' 00:31:27.535 18:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:31:27.535 18:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:31:27.535 18:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:27.535 18:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3b1561fb-f682-4823-a88d-e06f6193ead9 --l2p_dram_limit 10 -c nvc0n1p0 00:31:27.803 [2024-11-26 18:35:20.973177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.803 [2024-11-26 18:35:20.973293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:27.803 [2024-11-26 18:35:20.973339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:27.803 [2024-11-26 18:35:20.973361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.803 [2024-11-26 18:35:20.973476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.803 [2024-11-26 18:35:20.973528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:27.803 [2024-11-26 18:35:20.973560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:27.803 [2024-11-26 18:35:20.973581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.803 [2024-11-26 18:35:20.973641] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:27.803 [2024-11-26 18:35:20.974661] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:27.803 [2024-11-26 18:35:20.974734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.803 [2024-11-26 18:35:20.974768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:27.803 [2024-11-26 18:35:20.974793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:31:27.803 [2024-11-26 18:35:20.974815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.803 [2024-11-26 18:35:20.974910] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b51e1e54-54e8-46dc-809d-0771feafeb8a 00:31:27.803 [2024-11-26 18:35:20.976318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.803 [2024-11-26 18:35:20.976385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:27.803 [2024-11-26 18:35:20.976418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:27.803 [2024-11-26 18:35:20.976442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.803 [2024-11-26 18:35:20.983777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.803 [2024-11-26 18:35:20.983857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:27.803 [2024-11-26 18:35:20.983888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.288 ms 00:31:27.804 [2024-11-26 18:35:20.983910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.804 [2024-11-26 18:35:20.984017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.804 [2024-11-26 18:35:20.984061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:27.804 [2024-11-26 18:35:20.984072] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:31:27.804 [2024-11-26 18:35:20.984085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.804 [2024-11-26 18:35:20.984156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.804 [2024-11-26 18:35:20.984168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:27.804 [2024-11-26 18:35:20.984179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:27.804 [2024-11-26 18:35:20.984188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.804 [2024-11-26 18:35:20.984212] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:27.804 [2024-11-26 18:35:20.989121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.804 [2024-11-26 18:35:20.989154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:27.804 [2024-11-26 18:35:20.989167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.924 ms 00:31:27.804 [2024-11-26 18:35:20.989174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.804 [2024-11-26 18:35:20.989207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.804 [2024-11-26 18:35:20.989215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:27.804 [2024-11-26 18:35:20.989224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:27.804 [2024-11-26 18:35:20.989232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.804 [2024-11-26 18:35:20.989265] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:27.804 [2024-11-26 18:35:20.989385] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:27.804 [2024-11-26 18:35:20.989401] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:27.804 [2024-11-26 18:35:20.989411] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:27.804 [2024-11-26 18:35:20.989422] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:27.804 [2024-11-26 18:35:20.989431] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:27.804 [2024-11-26 18:35:20.989441] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:27.804 [2024-11-26 18:35:20.989450] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:27.804 [2024-11-26 18:35:20.989460] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:27.804 [2024-11-26 18:35:20.989468] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:27.804 [2024-11-26 18:35:20.989478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.804 [2024-11-26 18:35:20.989497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:27.804 [2024-11-26 18:35:20.989508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:31:27.804 [2024-11-26 18:35:20.989515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.804 [2024-11-26 18:35:20.989589] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.804 [2024-11-26 18:35:20.989597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:27.804 [2024-11-26 18:35:20.989606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:27.804 [2024-11-26 18:35:20.989613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.804 [2024-11-26 18:35:20.989728] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:27.804 [2024-11-26 18:35:20.989739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:27.804 [2024-11-26 18:35:20.989749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:27.804 [2024-11-26 18:35:20.989756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.804 [2024-11-26 18:35:20.989766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:27.804 [2024-11-26 18:35:20.989772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:27.804 [2024-11-26 18:35:20.989781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:27.804 [2024-11-26 18:35:20.989787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:27.804 [2024-11-26 18:35:20.989796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:27.804 [2024-11-26 18:35:20.989802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:27.804 [2024-11-26 18:35:20.989810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:27.804 [2024-11-26 18:35:20.989817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:27.804 [2024-11-26 18:35:20.989825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:27.804 [2024-11-26 18:35:20.989832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:27.804 [2024-11-26 18:35:20.989841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:27.804 [2024-11-26 18:35:20.989847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.804 [2024-11-26 18:35:20.989857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:27.804 [2024-11-26 18:35:20.989863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:27.804 [2024-11-26 18:35:20.989873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.804 [2024-11-26 18:35:20.989880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:27.804 [2024-11-26 18:35:20.989888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:27.804 [2024-11-26 18:35:20.989895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.804 [2024-11-26 18:35:20.989903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:27.804 [2024-11-26 18:35:20.989910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:27.804 [2024-11-26 18:35:20.989919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.804 [2024-11-26 18:35:20.989925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:27.804 [2024-11-26 18:35:20.989934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:27.804 [2024-11-26 18:35:20.989940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.804 [2024-11-26 18:35:20.989948] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:27.804 [2024-11-26 18:35:20.989969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:27.804 [2024-11-26 18:35:20.989977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.804 [2024-11-26 18:35:20.989983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:27.804 [2024-11-26 18:35:20.989993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:27.804 [2024-11-26 18:35:20.989999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:27.804 [2024-11-26 18:35:20.990007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:27.804 [2024-11-26 18:35:20.990014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:27.804 [2024-11-26 18:35:20.990022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:27.804 [2024-11-26 18:35:20.990028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:27.804 [2024-11-26 18:35:20.990036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:27.804 [2024-11-26 18:35:20.990041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.804 [2024-11-26 18:35:20.990049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:27.804 [2024-11-26 18:35:20.990055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:27.804 [2024-11-26 18:35:20.990064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.804 [2024-11-26 18:35:20.990071] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:27.804 [2024-11-26 18:35:20.990079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:27.804 [2024-11-26 18:35:20.990087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:27.804 [2024-11-26 18:35:20.990096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.804 [2024-11-26 18:35:20.990103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:27.804 [2024-11-26 18:35:20.990113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:27.804 [2024-11-26 18:35:20.990120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:27.805 [2024-11-26 18:35:20.990128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:27.805 [2024-11-26 18:35:20.990135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:27.805 [2024-11-26 18:35:20.990143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:27.805 [2024-11-26 18:35:20.990153] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:27.805 [2024-11-26 18:35:20.990166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:27.805 [2024-11-26 18:35:20.990176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:27.805 [2024-11-26 18:35:20.990185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:27.805 [2024-11-26 18:35:20.990193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:27.805 [2024-11-26 18:35:20.990201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:27.805 [2024-11-26 18:35:20.990209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:27.805 [2024-11-26 18:35:20.990219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:27.805 [2024-11-26 18:35:20.990226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:27.805 [2024-11-26 18:35:20.990234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:27.805 [2024-11-26 18:35:20.990241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:27.805 [2024-11-26 18:35:20.990252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:27.805 [2024-11-26 18:35:20.990258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:27.805 [2024-11-26 18:35:20.990267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:27.805 [2024-11-26 18:35:20.990274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:27.805 [2024-11-26 18:35:20.990283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:27.805 [2024-11-26 18:35:20.990290] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:27.805 [2024-11-26 18:35:20.990299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:27.805 [2024-11-26 18:35:20.990306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:27.805 [2024-11-26 18:35:20.990315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:27.805 [2024-11-26 18:35:20.990321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:27.805 [2024-11-26 18:35:20.990329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:27.805 [2024-11-26 18:35:20.990337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.805 [2024-11-26 18:35:20.990346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:27.805 [2024-11-26 18:35:20.990355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:31:27.805 [2024-11-26 18:35:20.990363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.805 [2024-11-26 18:35:20.990400] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:27.805 [2024-11-26 18:35:20.990415] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:32.004 [2024-11-26 18:35:24.768886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:24.768949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:32.004 [2024-11-26 18:35:24.768964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3785.771 ms 00:31:32.004 [2024-11-26 18:35:24.768974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:24.808948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:24.809002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:32.004 [2024-11-26 18:35:24.809015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.766 ms 00:31:32.004 [2024-11-26 18:35:24.809026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:24.809165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:24.809180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:32.004 [2024-11-26 18:35:24.809189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:32.004 [2024-11-26 18:35:24.809203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:24.854949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:24.854997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:32.004 [2024-11-26 18:35:24.855009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.800 ms 00:31:32.004 [2024-11-26 18:35:24.855019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:24.855066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:24.855076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:32.004 [2024-11-26 18:35:24.855085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:32.004 [2024-11-26 18:35:24.855106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:24.855580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:24.855609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:32.004 [2024-11-26 18:35:24.855628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:31:32.004 [2024-11-26 18:35:24.855638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:24.855727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:24.855748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:32.004 [2024-11-26 18:35:24.855757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:31:32.004 [2024-11-26 18:35:24.855768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:24.875218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:24.875263] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:32.004 [2024-11-26 18:35:24.875274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.470 ms 00:31:32.004 [2024-11-26 18:35:24.875283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:24.902222] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:32.004 [2024-11-26 18:35:24.905433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:24.905466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:32.004 [2024-11-26 18:35:24.905479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.120 ms 00:31:32.004 [2024-11-26 18:35:24.905488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:24.997311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:24.997370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:32.004 [2024-11-26 18:35:24.997387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.961 ms 00:31:32.004 [2024-11-26 18:35:24.997396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:24.997579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:24.997591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:32.004 [2024-11-26 18:35:24.997604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:31:32.004 [2024-11-26 18:35:24.997611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:25.033413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:25.033463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:32.004 [2024-11-26 18:35:25.033478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.809 ms 00:31:32.004 [2024-11-26 18:35:25.033486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:25.069757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:25.069807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:32.004 [2024-11-26 18:35:25.069823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.287 ms 00:31:32.004 [2024-11-26 18:35:25.069831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:25.070567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:25.070591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:32.004 [2024-11-26 18:35:25.070606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:31:32.004 [2024-11-26 18:35:25.070614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:25.172549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:25.172606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:32.004 [2024-11-26 18:35:25.172630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.061 ms 00:31:32.004 [2024-11-26 18:35:25.172640] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:25.208923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:25.208985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:32.004 [2024-11-26 18:35:25.209002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.268 ms 00:31:32.004 [2024-11-26 18:35:25.209011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:25.246527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:25.246576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:32.004 [2024-11-26 18:35:25.246592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.528 ms 00:31:32.004 [2024-11-26 18:35:25.246600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:25.282594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:25.282648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:32.004 [2024-11-26 18:35:25.282663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.003 ms 00:31:32.004 [2024-11-26 18:35:25.282671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:25.282719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:25.282728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:32.004 [2024-11-26 18:35:25.282742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:32.004 [2024-11-26 18:35:25.282749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.004 [2024-11-26 18:35:25.282845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.004 [2024-11-26 18:35:25.282859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:32.004 [2024-11-26 18:35:25.282870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:32.004 [2024-11-26 18:35:25.282878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.005 [2024-11-26 18:35:25.283932] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4318.607 ms, result 0 00:31:32.005 { 00:31:32.005 "name": "ftl0", 00:31:32.005 "uuid": "b51e1e54-54e8-46dc-809d-0771feafeb8a" 00:31:32.005 } 00:31:32.005 18:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:31:32.005 18:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:32.265 18:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:31:32.265 18:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:31:32.265 18:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:31:32.526 /dev/nbd0 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:31:32.526 1+0 records in 00:31:32.526 1+0 records out 00:31:32.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00150936 s, 2.7 MB/s 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:31:32.526 18:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:31:32.786 [2024-11-26 18:35:25.909739] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:31:32.786 [2024-11-26 18:35:25.909971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82051 ] 00:31:32.786 [2024-11-26 18:35:26.091285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.045 [2024-11-26 18:35:26.236643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.425  [2024-11-26T18:35:28.702Z] Copying: 216/1024 [MB] (216 MBps) [2024-11-26T18:35:29.636Z] Copying: 436/1024 [MB] (220 MBps) [2024-11-26T18:35:31.013Z] Copying: 653/1024 [MB] (216 MBps) [2024-11-26T18:35:31.581Z] Copying: 873/1024 [MB] (220 MBps) [2024-11-26T18:35:32.958Z] Copying: 1024/1024 [MB] (average 218 MBps) 00:31:39.623 00:31:39.623 18:35:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:41.000 18:35:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:31:41.260 [2024-11-26 18:35:34.363770] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
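The sequence above is the write half of the dirty-shutdown data path: spdk_dd fills the testfile with 1 GiB of random data (262144 blocks of 4096 B, averaging ~218 MBps here), the file is checksummed, and a second spdk_dd then streams it onto the FTL bdev through the /dev/nbd0 export with O_DIRECT. A condensed sketch using the flags from this run ($testdir is shorthand for the test/ftl directory, not a variable set in the log):

    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # generate 262144 x 4096 B = 1 GiB of random payload
    $spdk_dd -m 0x2 --if=/dev/urandom --of=$testdir/testfile --bs=4096 --count=262144
    md5sum $testdir/testfile    # recorded for the later restore check
    # push the payload onto the FTL bdev via its nbd export, bypassing the page cache
    $spdk_dd -m 0x2 --if=$testdir/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct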
00:31:41.260 [2024-11-26 18:35:34.363885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82135 ] 00:31:41.260 [2024-11-26 18:35:34.536074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.519 [2024-11-26 18:35:34.679830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.898  [2024-11-26T18:35:37.188Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-26T18:35:38.125Z] Copying: 38/1024 [MB] (19 MBps) [2024-11-26T18:35:39.063Z] Copying: 56/1024 [MB] (18 MBps) [2024-11-26T18:35:40.456Z] Copying: 75/1024 [MB] (18 MBps) [2024-11-26T18:35:41.393Z] Copying: 94/1024 [MB] (19 MBps) [2024-11-26T18:35:42.352Z] Copying: 113/1024 [MB] (19 MBps) [2024-11-26T18:35:43.291Z] Copying: 132/1024 [MB] (19 MBps) [2024-11-26T18:35:44.230Z] Copying: 152/1024 [MB] (19 MBps) [2024-11-26T18:35:45.168Z] Copying: 171/1024 [MB] (19 MBps) [2024-11-26T18:35:46.107Z] Copying: 190/1024 [MB] (19 MBps) [2024-11-26T18:35:47.487Z] Copying: 210/1024 [MB] (19 MBps) [2024-11-26T18:35:48.057Z] Copying: 229/1024 [MB] (19 MBps) [2024-11-26T18:35:49.436Z] Copying: 249/1024 [MB] (19 MBps) [2024-11-26T18:35:50.372Z] Copying: 269/1024 [MB] (19 MBps) [2024-11-26T18:35:51.328Z] Copying: 289/1024 [MB] (20 MBps) [2024-11-26T18:35:52.325Z] Copying: 309/1024 [MB] (20 MBps) [2024-11-26T18:35:53.265Z] Copying: 329/1024 [MB] (19 MBps) [2024-11-26T18:35:54.205Z] Copying: 349/1024 [MB] (19 MBps) [2024-11-26T18:35:55.144Z] Copying: 369/1024 [MB] (20 MBps) [2024-11-26T18:35:56.083Z] Copying: 388/1024 [MB] (19 MBps) [2024-11-26T18:35:57.464Z] Copying: 408/1024 [MB] (19 MBps) [2024-11-26T18:35:58.034Z] Copying: 428/1024 [MB] (20 MBps) [2024-11-26T18:35:59.450Z] Copying: 448/1024 [MB] (20 MBps) [2024-11-26T18:36:00.388Z] Copying: 468/1024 [MB] (19 MBps) [2024-11-26T18:36:01.323Z] Copying: 488/1024 [MB] (20 MBps) [2024-11-26T18:36:02.257Z] Copying: 508/1024 [MB] (19 MBps) [2024-11-26T18:36:03.191Z] Copying: 528/1024 [MB] (19 MBps) [2024-11-26T18:36:04.142Z] Copying: 548/1024 [MB] (20 MBps) [2024-11-26T18:36:05.079Z] Copying: 569/1024 [MB] (20 MBps) [2024-11-26T18:36:06.015Z] Copying: 589/1024 [MB] (20 MBps) [2024-11-26T18:36:07.391Z] Copying: 610/1024 [MB] (20 MBps) [2024-11-26T18:36:08.331Z] Copying: 630/1024 [MB] (20 MBps) [2024-11-26T18:36:09.272Z] Copying: 650/1024 [MB] (20 MBps) [2024-11-26T18:36:10.211Z] Copying: 671/1024 [MB] (20 MBps) [2024-11-26T18:36:11.150Z] Copying: 691/1024 [MB] (20 MBps) [2024-11-26T18:36:12.087Z] Copying: 712/1024 [MB] (20 MBps) [2024-11-26T18:36:13.038Z] Copying: 732/1024 [MB] (20 MBps) [2024-11-26T18:36:14.417Z] Copying: 752/1024 [MB] (19 MBps) [2024-11-26T18:36:15.354Z] Copying: 772/1024 [MB] (20 MBps) [2024-11-26T18:36:16.291Z] Copying: 792/1024 [MB] (20 MBps) [2024-11-26T18:36:17.228Z] Copying: 813/1024 [MB] (20 MBps) [2024-11-26T18:36:18.163Z] Copying: 832/1024 [MB] (19 MBps) [2024-11-26T18:36:19.113Z] Copying: 852/1024 [MB] (19 MBps) [2024-11-26T18:36:20.053Z] Copying: 873/1024 [MB] (20 MBps) [2024-11-26T18:36:20.992Z] Copying: 893/1024 [MB] (20 MBps) [2024-11-26T18:36:22.372Z] Copying: 913/1024 [MB] (20 MBps) [2024-11-26T18:36:23.311Z] Copying: 933/1024 [MB] (19 MBps) [2024-11-26T18:36:24.250Z] Copying: 953/1024 [MB] (20 MBps) [2024-11-26T18:36:25.188Z] Copying: 974/1024 [MB] (20 MBps) [2024-11-26T18:36:26.128Z] Copying: 993/1024 [MB] (19 MBps) 
[2024-11-26T18:36:26.696Z] Copying: 1013/1024 [MB] (19 MBps) [2024-11-26T18:36:27.637Z] Copying: 1024/1024 [MB] (average 19 MBps) 00:32:34.302 00:32:34.302 18:36:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:32:34.302 18:36:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:32:34.562 18:36:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:34.821 [2024-11-26 18:36:28.019698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.821 [2024-11-26 18:36:28.019753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:34.821 [2024-11-26 18:36:28.019768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:34.821 [2024-11-26 18:36:28.019781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.821 [2024-11-26 18:36:28.019812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:34.821 [2024-11-26 18:36:28.023949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.821 [2024-11-26 18:36:28.023991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:34.821 [2024-11-26 18:36:28.024005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.124 ms 00:32:34.821 [2024-11-26 18:36:28.024012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.821 [2024-11-26 18:36:28.026142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.821 [2024-11-26 18:36:28.026182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:34.821 [2024-11-26 18:36:28.026211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.095 ms 00:32:34.821 [2024-11-26 18:36:28.026220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.821 [2024-11-26 18:36:28.043457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.821 [2024-11-26 18:36:28.043497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:34.821 [2024-11-26 18:36:28.043511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.235 ms 00:32:34.821 [2024-11-26 18:36:28.043520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.821 [2024-11-26 18:36:28.048534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.821 [2024-11-26 18:36:28.048568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:34.821 [2024-11-26 18:36:28.048595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.979 ms 00:32:34.821 [2024-11-26 18:36:28.048603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.821 [2024-11-26 18:36:28.084771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.821 [2024-11-26 18:36:28.084812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:34.821 [2024-11-26 18:36:28.084848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.161 ms 00:32:34.821 [2024-11-26 18:36:28.084856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.821 [2024-11-26 18:36:28.106715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.821 [2024-11-26 18:36:28.106755] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:34.821 [2024-11-26 18:36:28.106772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.854 ms 00:32:34.821 [2024-11-26 18:36:28.106780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.821 [2024-11-26 18:36:28.106921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.821 [2024-11-26 18:36:28.106939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:34.821 [2024-11-26 18:36:28.106950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:32:34.821 [2024-11-26 18:36:28.106957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.821 [2024-11-26 18:36:28.142431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.821 [2024-11-26 18:36:28.142467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:34.821 [2024-11-26 18:36:28.142496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.524 ms 00:32:34.822 [2024-11-26 18:36:28.142503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.083 [2024-11-26 18:36:28.178446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.083 [2024-11-26 18:36:28.178488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:35.083 [2024-11-26 18:36:28.178502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.972 ms 00:32:35.083 [2024-11-26 18:36:28.178509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.083 [2024-11-26 18:36:28.213315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.083 [2024-11-26 18:36:28.213351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:35.083 [2024-11-26 18:36:28.213363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.829 ms 00:32:35.083 [2024-11-26 18:36:28.213370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.083 [2024-11-26 18:36:28.248706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.083 [2024-11-26 18:36:28.248747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:35.083 [2024-11-26 18:36:28.248762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.314 ms 00:32:35.083 [2024-11-26 18:36:28.248770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.083 [2024-11-26 18:36:28.248809] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:35.083 [2024-11-26 18:36:28.248824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248887] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.248996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 
18:36:28.249115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:32:35.083 [2024-11-26 18:36:28.249352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:35.083 [2024-11-26 18:36:28.249463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:35.084 [2024-11-26 18:36:28.249754] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:35.084 [2024-11-26 18:36:28.249765] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b51e1e54-54e8-46dc-809d-0771feafeb8a 00:32:35.084 [2024-11-26 18:36:28.249773] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:35.084 [2024-11-26 18:36:28.249784] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:35.084 [2024-11-26 18:36:28.249794] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:35.084 [2024-11-26 18:36:28.249804] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:35.084 [2024-11-26 18:36:28.249812] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] limits: 00:32:35.084 [2024-11-26 18:36:28.249821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:35.084 [2024-11-26 18:36:28.249829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:35.084 [2024-11-26 18:36:28.249837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:35.084 [2024-11-26 18:36:28.249843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:35.084 [2024-11-26 18:36:28.249852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.084 [2024-11-26 18:36:28.249861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:35.084 [2024-11-26 18:36:28.249871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:32:35.084 [2024-11-26 18:36:28.249879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.084 [2024-11-26 18:36:28.269627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.084 [2024-11-26 18:36:28.269661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:35.084 [2024-11-26 18:36:28.269674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.726 ms 00:32:35.084 [2024-11-26 18:36:28.269681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.084 [2024-11-26 18:36:28.270239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.084 [2024-11-26 18:36:28.270260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:35.084 [2024-11-26 18:36:28.270270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:32:35.084 [2024-11-26 18:36:28.270278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.084 [2024-11-26 18:36:28.334953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.084 [2024-11-26 18:36:28.334991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:35.084 [2024-11-26 18:36:28.335003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.084 [2024-11-26 18:36:28.335011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.084 [2024-11-26 18:36:28.335071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.084 [2024-11-26 18:36:28.335081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:35.084 [2024-11-26 18:36:28.335091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.084 [2024-11-26 18:36:28.335098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.084 [2024-11-26 18:36:28.335190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.084 [2024-11-26 18:36:28.335213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:35.084 [2024-11-26 18:36:28.335226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.084 [2024-11-26 18:36:28.335233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.084 [2024-11-26 18:36:28.335256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.084 [2024-11-26 18:36:28.335264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:35.084 [2024-11-26 18:36:28.335273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.084 [2024-11-26 
18:36:28.335280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.344 [2024-11-26 18:36:28.459474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.344 [2024-11-26 18:36:28.459528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:35.344 [2024-11-26 18:36:28.459543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.344 [2024-11-26 18:36:28.459551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.344 [2024-11-26 18:36:28.558825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.344 [2024-11-26 18:36:28.558874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:35.344 [2024-11-26 18:36:28.558889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.344 [2024-11-26 18:36:28.558897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.344 [2024-11-26 18:36:28.559004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.344 [2024-11-26 18:36:28.559015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:35.344 [2024-11-26 18:36:28.559028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.344 [2024-11-26 18:36:28.559035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.344 [2024-11-26 18:36:28.559080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.344 [2024-11-26 18:36:28.559089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:35.344 [2024-11-26 18:36:28.559099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.344 [2024-11-26 18:36:28.559106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.344 [2024-11-26 18:36:28.559212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.344 [2024-11-26 18:36:28.559223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:35.344 [2024-11-26 18:36:28.559232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.344 [2024-11-26 18:36:28.559241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.344 [2024-11-26 18:36:28.559278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.344 [2024-11-26 18:36:28.559287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:35.344 [2024-11-26 18:36:28.559297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.344 [2024-11-26 18:36:28.559304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.344 [2024-11-26 18:36:28.559343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.344 [2024-11-26 18:36:28.559351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:35.344 [2024-11-26 18:36:28.559360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.344 [2024-11-26 18:36:28.559369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.344 [2024-11-26 18:36:28.559413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.344 [2024-11-26 18:36:28.559422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:35.344 [2024-11-26 18:36:28.559432] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.344 [2024-11-26 18:36:28.559439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.344 [2024-11-26 18:36:28.559572] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 540.888 ms, result 0 00:32:35.344 true 00:32:35.344 18:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81899 00:32:35.345 18:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81899 00:32:35.345 18:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:32:35.345 [2024-11-26 18:36:28.665327] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:32:35.345 [2024-11-26 18:36:28.665449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82682 ] 00:32:35.605 [2024-11-26 18:36:28.821992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.605 [2024-11-26 18:36:28.934854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.984  [2024-11-26T18:36:31.257Z] Copying: 233/1024 [MB] (233 MBps) [2024-11-26T18:36:32.641Z] Copying: 471/1024 [MB] (238 MBps) [2024-11-26T18:36:33.254Z] Copying: 716/1024 [MB] (244 MBps) [2024-11-26T18:36:33.873Z] Copying: 944/1024 [MB] (228 MBps) [2024-11-26T18:36:34.810Z] Copying: 1024/1024 [MB] (average 235 MBps) 00:32:41.475 00:32:41.475 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81899 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:32:41.475 18:36:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:41.734 [2024-11-26 18:36:34.833047] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
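
This is the step the test is named for: kill -9 takes the SPDK target down with no chance to run the 'Set FTL clean state' step seen in the clean shutdown above, so the next attach of ftl0 must recover from media. A condensed sketch of the sequence, using the PID and spdk_dd flags exactly as logged (paths abbreviated; error handling omitted):

    # Dirty shutdown: SIGKILL the target so FTL cannot persist a clean-state marker.
    kill -9 81899
    rm -f /dev/shm/spdk_tgt_trace.pid81899

    # Reattach from the saved subsystem config and write the second test pattern;
    # spdk_dd opening ftl0 via --json is what triggers the recovery startup below.
    spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
            --json=test/ftl/config/ftl.json
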
00:32:41.734 [2024-11-26 18:36:34.833207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82745 ] 00:32:41.734 [2024-11-26 18:36:35.006396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.994 [2024-11-26 18:36:35.119356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.254 [2024-11-26 18:36:35.465958] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:42.254 [2024-11-26 18:36:35.466019] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:42.254 [2024-11-26 18:36:35.531015] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:42.254 [2024-11-26 18:36:35.531258] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:42.254 [2024-11-26 18:36:35.531449] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:42.515 [2024-11-26 18:36:35.784314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.515 [2024-11-26 18:36:35.784363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:42.515 [2024-11-26 18:36:35.784375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:42.515 [2024-11-26 18:36:35.784386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.515 [2024-11-26 18:36:35.784430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.515 [2024-11-26 18:36:35.784440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:42.515 [2024-11-26 18:36:35.784448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:42.515 [2024-11-26 18:36:35.784455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.515 [2024-11-26 18:36:35.784472] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:42.515 [2024-11-26 18:36:35.785441] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:42.515 [2024-11-26 18:36:35.785459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.515 [2024-11-26 18:36:35.785467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:42.515 [2024-11-26 18:36:35.785475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:32:42.515 [2024-11-26 18:36:35.785483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.515 [2024-11-26 18:36:35.786885] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:42.515 [2024-11-26 18:36:35.805248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.515 [2024-11-26 18:36:35.805282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:42.515 [2024-11-26 18:36:35.805293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.416 ms 00:32:42.515 [2024-11-26 18:36:35.805301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.515 [2024-11-26 18:36:35.805354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.515 [2024-11-26 18:36:35.805364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:32:42.515 [2024-11-26 18:36:35.805372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:32:42.515 [2024-11-26 18:36:35.805380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.515 [2024-11-26 18:36:35.811918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.515 [2024-11-26 18:36:35.811944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:42.515 [2024-11-26 18:36:35.811953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.493 ms 00:32:42.515 [2024-11-26 18:36:35.811960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.515 [2024-11-26 18:36:35.812036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.515 [2024-11-26 18:36:35.812048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:42.515 [2024-11-26 18:36:35.812056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:42.515 [2024-11-26 18:36:35.812063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.515 [2024-11-26 18:36:35.812104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.515 [2024-11-26 18:36:35.812113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:42.515 [2024-11-26 18:36:35.812121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:42.515 [2024-11-26 18:36:35.812128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.515 [2024-11-26 18:36:35.812149] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:42.515 [2024-11-26 18:36:35.816840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.515 [2024-11-26 18:36:35.816864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:42.515 [2024-11-26 18:36:35.816873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.707 ms 00:32:42.515 [2024-11-26 18:36:35.816895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.515 [2024-11-26 18:36:35.816921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.515 [2024-11-26 18:36:35.816929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:42.515 [2024-11-26 18:36:35.816936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:42.515 [2024-11-26 18:36:35.816944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.515 [2024-11-26 18:36:35.816990] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:42.515 [2024-11-26 18:36:35.817012] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:42.515 [2024-11-26 18:36:35.817043] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:42.515 [2024-11-26 18:36:35.817057] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:42.515 [2024-11-26 18:36:35.817140] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:42.515 [2024-11-26 18:36:35.817150] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:42.515 
[2024-11-26 18:36:35.817160] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:42.515 [2024-11-26 18:36:35.817173] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:42.515 [2024-11-26 18:36:35.817182] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:42.515 [2024-11-26 18:36:35.817189] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:42.515 [2024-11-26 18:36:35.817197] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:42.515 [2024-11-26 18:36:35.817204] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:42.515 [2024-11-26 18:36:35.817218] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:42.515 [2024-11-26 18:36:35.817226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.515 [2024-11-26 18:36:35.817233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:42.515 [2024-11-26 18:36:35.817241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:32:42.515 [2024-11-26 18:36:35.817249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.515 [2024-11-26 18:36:35.817315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.516 [2024-11-26 18:36:35.817326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:42.516 [2024-11-26 18:36:35.817334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:42.516 [2024-11-26 18:36:35.817341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.516 [2024-11-26 18:36:35.817429] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:42.516 [2024-11-26 18:36:35.817443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:42.516 [2024-11-26 18:36:35.817451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:42.516 [2024-11-26 18:36:35.817458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.516 [2024-11-26 18:36:35.817466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:42.516 [2024-11-26 18:36:35.817472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:42.516 [2024-11-26 18:36:35.817479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:42.516 [2024-11-26 18:36:35.817487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:42.516 [2024-11-26 18:36:35.817498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:42.516 [2024-11-26 18:36:35.817514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:42.516 [2024-11-26 18:36:35.817522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:42.516 [2024-11-26 18:36:35.817529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:42.516 [2024-11-26 18:36:35.817535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:42.516 [2024-11-26 18:36:35.817542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:42.516 [2024-11-26 18:36:35.817548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:42.516 [2024-11-26 18:36:35.817555] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.516 [2024-11-26 18:36:35.817561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:42.516 [2024-11-26 18:36:35.817568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:42.516 [2024-11-26 18:36:35.817574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.516 [2024-11-26 18:36:35.817581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:42.516 [2024-11-26 18:36:35.817587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:42.516 [2024-11-26 18:36:35.817593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:42.516 [2024-11-26 18:36:35.817600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:42.516 [2024-11-26 18:36:35.817606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:42.516 [2024-11-26 18:36:35.817612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:42.516 [2024-11-26 18:36:35.817630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:42.516 [2024-11-26 18:36:35.817636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:42.516 [2024-11-26 18:36:35.817642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:42.516 [2024-11-26 18:36:35.817650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:42.516 [2024-11-26 18:36:35.817656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:42.516 [2024-11-26 18:36:35.817663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:42.516 [2024-11-26 18:36:35.817670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:42.516 [2024-11-26 18:36:35.817676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:42.516 [2024-11-26 18:36:35.817683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:42.516 [2024-11-26 18:36:35.817689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:42.516 [2024-11-26 18:36:35.817695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:42.516 [2024-11-26 18:36:35.817701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:42.516 [2024-11-26 18:36:35.817707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:42.516 [2024-11-26 18:36:35.817713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:42.516 [2024-11-26 18:36:35.817719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.516 [2024-11-26 18:36:35.817728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:42.516 [2024-11-26 18:36:35.817735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:42.516 [2024-11-26 18:36:35.817741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.516 [2024-11-26 18:36:35.817748] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:42.516 [2024-11-26 18:36:35.817754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:42.516 [2024-11-26 18:36:35.817766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:42.516 [2024-11-26 18:36:35.817773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.516 [2024-11-26 
18:36:35.817780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:42.516 [2024-11-26 18:36:35.817787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:42.516 [2024-11-26 18:36:35.817794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:42.516 [2024-11-26 18:36:35.817801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:42.516 [2024-11-26 18:36:35.817807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:42.516 [2024-11-26 18:36:35.817813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:42.516 [2024-11-26 18:36:35.817822] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:42.516 [2024-11-26 18:36:35.817831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:42.516 [2024-11-26 18:36:35.817839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:42.516 [2024-11-26 18:36:35.817846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:42.516 [2024-11-26 18:36:35.817853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:42.516 [2024-11-26 18:36:35.817860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:42.516 [2024-11-26 18:36:35.817867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:42.516 [2024-11-26 18:36:35.817874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:42.516 [2024-11-26 18:36:35.817881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:42.516 [2024-11-26 18:36:35.817888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:42.516 [2024-11-26 18:36:35.817895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:42.516 [2024-11-26 18:36:35.817902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:42.516 [2024-11-26 18:36:35.817909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:42.516 [2024-11-26 18:36:35.817916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:42.516 [2024-11-26 18:36:35.817922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:42.516 [2024-11-26 18:36:35.817929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:42.516 [2024-11-26 18:36:35.817937] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:32:42.516 [2024-11-26 18:36:35.817944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:42.516 [2024-11-26 18:36:35.817953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:42.516 [2024-11-26 18:36:35.817962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:42.516 [2024-11-26 18:36:35.817969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:42.516 [2024-11-26 18:36:35.817975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:42.516 [2024-11-26 18:36:35.817983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.516 [2024-11-26 18:36:35.817991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:42.516 [2024-11-26 18:36:35.817999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:32:42.516 [2024-11-26 18:36:35.818006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.776 [2024-11-26 18:36:35.856804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.776 [2024-11-26 18:36:35.856852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:42.776 [2024-11-26 18:36:35.856866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.819 ms 00:32:42.776 [2024-11-26 18:36:35.856875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.776 [2024-11-26 18:36:35.856996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.776 [2024-11-26 18:36:35.857006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:42.776 [2024-11-26 18:36:35.857015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:32:42.776 [2024-11-26 18:36:35.857024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.776 [2024-11-26 18:36:35.913751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.776 [2024-11-26 18:36:35.913800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:42.777 [2024-11-26 18:36:35.913816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.765 ms 00:32:42.777 [2024-11-26 18:36:35.913824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.777 [2024-11-26 18:36:35.913880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.777 [2024-11-26 18:36:35.913888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:42.777 [2024-11-26 18:36:35.913896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:42.777 [2024-11-26 18:36:35.913904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.777 [2024-11-26 18:36:35.914374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.777 [2024-11-26 18:36:35.914385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:42.777 [2024-11-26 18:36:35.914393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:32:42.777 [2024-11-26 18:36:35.914407] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.777 [2024-11-26 18:36:35.914516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.777 [2024-11-26 18:36:35.914529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:42.777 [2024-11-26 18:36:35.914538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:32:42.777 [2024-11-26 18:36:35.914545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.777 [2024-11-26 18:36:35.933659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.777 [2024-11-26 18:36:35.933694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:42.777 [2024-11-26 18:36:35.933705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.130 ms 00:32:42.777 [2024-11-26 18:36:35.933713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.777 [2024-11-26 18:36:35.952764] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:42.777 [2024-11-26 18:36:35.952798] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:42.777 [2024-11-26 18:36:35.952825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.777 [2024-11-26 18:36:35.952841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:42.777 [2024-11-26 18:36:35.952850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.033 ms 00:32:42.777 [2024-11-26 18:36:35.952858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.777 [2024-11-26 18:36:35.983198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.777 [2024-11-26 18:36:35.983238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:42.777 [2024-11-26 18:36:35.983250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.347 ms 00:32:42.777 [2024-11-26 18:36:35.983260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.777 [2024-11-26 18:36:36.001260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.777 [2024-11-26 18:36:36.001295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:42.777 [2024-11-26 18:36:36.001308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.986 ms 00:32:42.777 [2024-11-26 18:36:36.001316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.777 [2024-11-26 18:36:36.019514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.777 [2024-11-26 18:36:36.019549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:42.777 [2024-11-26 18:36:36.019560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.192 ms 00:32:42.777 [2024-11-26 18:36:36.019568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.777 [2024-11-26 18:36:36.020243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.777 [2024-11-26 18:36:36.020269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:42.777 [2024-11-26 18:36:36.020279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:32:42.777 [2024-11-26 18:36:36.020287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
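
Everything from 'Load super block' down to here is the dirty-start recovery path: the 'SHM: clean 0, shm_clean 0' line above means no clean-shutdown marker was found, so the valid map, band info, trim and P2L state are rebuilt from media rather than shared memory. When scripting against logs like this one, a cheap assertion that recovery really ran is to grep for the restore steps; a sketch, where $FTL_LOG is a placeholder for the captured log file:

    # Step names match the trace_step output above.
    for step in 'Restore NV cache metadata' 'Restore valid map metadata' \
                'Restore band info metadata' 'Restore trim metadata'; do
        grep -q "name: $step" "$FTL_LOG" || { echo "missing: $step" >&2; exit 1; }
    done
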
00:32:42.777 [2024-11-26 18:36:36.107045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.777 [2024-11-26 18:36:36.107106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:42.777 [2024-11-26 18:36:36.107120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.903 ms 00:32:42.777 [2024-11-26 18:36:36.107129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.037 [2024-11-26 18:36:36.119885] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:43.037 [2024-11-26 18:36:36.123245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.037 [2024-11-26 18:36:36.123283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:43.037 [2024-11-26 18:36:36.123297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.071 ms 00:32:43.037 [2024-11-26 18:36:36.123313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.037 [2024-11-26 18:36:36.123426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.037 [2024-11-26 18:36:36.123441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:43.037 [2024-11-26 18:36:36.123452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:43.037 [2024-11-26 18:36:36.123461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.037 [2024-11-26 18:36:36.123553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.037 [2024-11-26 18:36:36.123566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:43.037 [2024-11-26 18:36:36.123577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:43.037 [2024-11-26 18:36:36.123585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.037 [2024-11-26 18:36:36.123647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.037 [2024-11-26 18:36:36.123660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:43.037 [2024-11-26 18:36:36.123670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:43.037 [2024-11-26 18:36:36.123678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.037 [2024-11-26 18:36:36.123710] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:43.037 [2024-11-26 18:36:36.123722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.037 [2024-11-26 18:36:36.123731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:43.037 [2024-11-26 18:36:36.123740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:43.037 [2024-11-26 18:36:36.123753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.037 [2024-11-26 18:36:36.160689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.037 [2024-11-26 18:36:36.160739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:43.037 [2024-11-26 18:36:36.160752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.982 ms 00:32:43.037 [2024-11-26 18:36:36.160760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.037 [2024-11-26 18:36:36.160855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.037 [2024-11-26 
18:36:36.160866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:43.037 [2024-11-26 18:36:36.160876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:32:43.037 [2024-11-26 18:36:36.160884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.037 [2024-11-26 18:36:36.162021] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 377.969 ms, result 0 00:32:43.974 [2024-11-26T18:37:11.386Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-26 18:37:11.210840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.051 [2024-11-26 18:37:11.210937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:18.051 [2024-11-26 18:37:11.210951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:18.051 [2024-11-26 18:37:11.210961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.051 [2024-11-26 18:37:11.213716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:18.051 [2024-11-26 18:37:11.219505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.051 [2024-11-26 18:37:11.219548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:18.051 [2024-11-26 18:37:11.219577]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.742 ms 00:33:18.051 [2024-11-26 18:37:11.219598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.051 [2024-11-26 18:37:11.246262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.051 [2024-11-26 18:37:11.246397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:18.051 [2024-11-26 18:37:11.246431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.691 ms 00:33:18.051 [2024-11-26 18:37:11.246452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.051 [2024-11-26 18:37:11.273836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.051 [2024-11-26 18:37:11.273897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:18.051 [2024-11-26 18:37:11.273913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.395 ms 00:33:18.051 [2024-11-26 18:37:11.273924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.051 [2024-11-26 18:37:11.279815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.051 [2024-11-26 18:37:11.279862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:18.051 [2024-11-26 18:37:11.279875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.847 ms 00:33:18.051 [2024-11-26 18:37:11.279884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.051 [2024-11-26 18:37:11.323834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.051 [2024-11-26 18:37:11.323918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:18.051 [2024-11-26 18:37:11.323941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.958 ms 00:33:18.051 [2024-11-26 18:37:11.323955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.051 [2024-11-26 18:37:11.347384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.051 [2024-11-26 18:37:11.347476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:18.051 [2024-11-26 18:37:11.347496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.389 ms 00:33:18.051 [2024-11-26 18:37:11.347508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.312 [2024-11-26 18:37:11.434450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.312 [2024-11-26 18:37:11.434538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:18.312 [2024-11-26 18:37:11.434574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.032 ms 00:33:18.312 [2024-11-26 18:37:11.434588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.312 [2024-11-26 18:37:11.479890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.312 [2024-11-26 18:37:11.479957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:18.312 [2024-11-26 18:37:11.479974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.361 ms 00:33:18.312 [2024-11-26 18:37:11.480003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.312 [2024-11-26 18:37:11.542519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.312 [2024-11-26 18:37:11.542584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist trim metadata 00:33:18.312 [2024-11-26 18:37:11.542616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.578 ms 00:33:18.312 [2024-11-26 18:37:11.542626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.312 [2024-11-26 18:37:11.580672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.312 [2024-11-26 18:37:11.580726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:18.312 [2024-11-26 18:37:11.580740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.046 ms 00:33:18.312 [2024-11-26 18:37:11.580748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.312 [2024-11-26 18:37:11.617163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.312 [2024-11-26 18:37:11.617218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:18.312 [2024-11-26 18:37:11.617231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.352 ms 00:33:18.312 [2024-11-26 18:37:11.617239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.312 [2024-11-26 18:37:11.617284] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:18.312 [2024-11-26 18:37:11.617301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115456 / 261120 wr_cnt: 1 state: open 00:33:18.312 [2024-11-26 18:37:11.617311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 
00:33:18.312 [2024-11-26 18:37:11.617436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:18.312 [2024-11-26 18:37:11.617492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 
wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.617995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.618003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.618011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.618018] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.618025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.618032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.618040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.618048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.618056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.618063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.618070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.618078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.618086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:18.313 [2024-11-26 18:37:11.618101] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:18.313 [2024-11-26 18:37:11.618109] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b51e1e54-54e8-46dc-809d-0771feafeb8a 00:33:18.313 [2024-11-26 18:37:11.618135] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115456 00:33:18.313 [2024-11-26 18:37:11.618143] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116416 00:33:18.313 [2024-11-26 18:37:11.618150] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115456 00:33:18.313 [2024-11-26 18:37:11.618158] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:33:18.313 [2024-11-26 18:37:11.618166] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:18.313 [2024-11-26 18:37:11.618175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:18.313 [2024-11-26 18:37:11.618182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:18.313 [2024-11-26 18:37:11.618188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:18.313 [2024-11-26 18:37:11.618195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:18.313 [2024-11-26 18:37:11.618203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.313 [2024-11-26 18:37:11.618211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:18.313 [2024-11-26 18:37:11.618219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:33:18.313 [2024-11-26 18:37:11.618226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.313 [2024-11-26 18:37:11.638034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.314 [2024-11-26 18:37:11.638078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:18.314 [2024-11-26 18:37:11.638090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.812 ms 00:33:18.314 [2024-11-26 18:37:11.638099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.314 [2024-11-26 
18:37:11.638662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.314 [2024-11-26 18:37:11.638679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:18.314 [2024-11-26 18:37:11.638694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:33:18.314 [2024-11-26 18:37:11.638701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.572 [2024-11-26 18:37:11.693427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.572 [2024-11-26 18:37:11.693484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:18.572 [2024-11-26 18:37:11.693496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.572 [2024-11-26 18:37:11.693504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.572 [2024-11-26 18:37:11.693581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.572 [2024-11-26 18:37:11.693591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:18.572 [2024-11-26 18:37:11.693605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.572 [2024-11-26 18:37:11.693613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.572 [2024-11-26 18:37:11.693716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.572 [2024-11-26 18:37:11.693728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:18.572 [2024-11-26 18:37:11.693737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.572 [2024-11-26 18:37:11.693745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.572 [2024-11-26 18:37:11.693763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.572 [2024-11-26 18:37:11.693771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:18.572 [2024-11-26 18:37:11.693779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.572 [2024-11-26 18:37:11.693787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.572 [2024-11-26 18:37:11.828984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.572 [2024-11-26 18:37:11.829051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:18.572 [2024-11-26 18:37:11.829065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.572 [2024-11-26 18:37:11.829073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.860 [2024-11-26 18:37:11.934571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.860 [2024-11-26 18:37:11.934651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:18.860 [2024-11-26 18:37:11.934666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.860 [2024-11-26 18:37:11.934684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.860 [2024-11-26 18:37:11.934778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.860 [2024-11-26 18:37:11.934789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:18.860 [2024-11-26 18:37:11.934798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.860 [2024-11-26 18:37:11.934807] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.860 [2024-11-26 18:37:11.934854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.860 [2024-11-26 18:37:11.934865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:18.860 [2024-11-26 18:37:11.934874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.860 [2024-11-26 18:37:11.934883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.860 [2024-11-26 18:37:11.935006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.861 [2024-11-26 18:37:11.935028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:18.861 [2024-11-26 18:37:11.935040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.861 [2024-11-26 18:37:11.935049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.861 [2024-11-26 18:37:11.935089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.861 [2024-11-26 18:37:11.935100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:18.861 [2024-11-26 18:37:11.935109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.861 [2024-11-26 18:37:11.935117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.861 [2024-11-26 18:37:11.935163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.861 [2024-11-26 18:37:11.935174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:18.861 [2024-11-26 18:37:11.935183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.861 [2024-11-26 18:37:11.935190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.861 [2024-11-26 18:37:11.935238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.861 [2024-11-26 18:37:11.935249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:18.861 [2024-11-26 18:37:11.935258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.861 [2024-11-26 18:37:11.935266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.861 [2024-11-26 18:37:11.935396] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 728.909 ms, result 0 00:33:22.144 00:33:22.144 00:33:22.144 18:37:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:33:23.523 18:37:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:23.523 [2024-11-26 18:37:16.841088] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
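[editor's note] Two numbers in the surrounding records can be checked by hand. The spdk_dd invocation above passes --count=262144; at a 4 KiB logical block size (an assumption about the FTL bdev, not something the log states) that is exactly 1 GiB, matching the 1024/1024 [MB] copy totals. Likewise, the WAF of 1.0083 in the earlier stats dump is simply total writes over user writes, 116416 / 115456. A small self-contained check:

    /*
     * Verifies two logged values from logged inputs. The 4096-byte block
     * size is an assumption; everything else is taken from the log above.
     */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* spdk_dd --count=262144 blocks, assumed 4 KiB each */
            uint64_t count = 262144;
            uint64_t block_size = 4096;
            printf("transfer size: %llu MiB\n",
                   (unsigned long long)((count * block_size) >> 20)); /* 1024 */

            /* stats dump: WAF = total writes / user writes */
            double total_writes = 116416.0;
            double user_writes = 115456.0;
            printf("WAF: %.4f\n", total_writes / user_writes); /* ~1.0083 */
            return 0;
    }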
00:33:23.523 [2024-11-26 18:37:16.841244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83162 ] 00:33:23.782 [2024-11-26 18:37:17.023760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.039 [2024-11-26 18:37:17.151047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.298 [2024-11-26 18:37:17.547115] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:24.298 [2024-11-26 18:37:17.547195] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:24.558 [2024-11-26 18:37:17.713525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.558 [2024-11-26 18:37:17.713636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:24.558 [2024-11-26 18:37:17.713659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:24.558 [2024-11-26 18:37:17.713672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.558 [2024-11-26 18:37:17.713763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.558 [2024-11-26 18:37:17.713782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:24.558 [2024-11-26 18:37:17.713797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:33:24.558 [2024-11-26 18:37:17.713809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.558 [2024-11-26 18:37:17.713840] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:24.558 [2024-11-26 18:37:17.715133] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:24.558 [2024-11-26 18:37:17.715192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.558 [2024-11-26 18:37:17.715208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:24.559 [2024-11-26 18:37:17.715223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:33:24.559 [2024-11-26 18:37:17.715236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.559 [2024-11-26 18:37:17.717047] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:24.559 [2024-11-26 18:37:17.738153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.559 [2024-11-26 18:37:17.738197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:24.559 [2024-11-26 18:37:17.738227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.149 ms 00:33:24.559 [2024-11-26 18:37:17.738237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.559 [2024-11-26 18:37:17.738313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.559 [2024-11-26 18:37:17.738325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:24.559 [2024-11-26 18:37:17.738336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:33:24.559 [2024-11-26 18:37:17.738344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.559 [2024-11-26 18:37:17.745209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:24.559 [2024-11-26 18:37:17.745242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:24.559 [2024-11-26 18:37:17.745254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.800 ms 00:33:24.559 [2024-11-26 18:37:17.745268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.559 [2024-11-26 18:37:17.745351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.559 [2024-11-26 18:37:17.745365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:24.559 [2024-11-26 18:37:17.745375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:33:24.559 [2024-11-26 18:37:17.745384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.559 [2024-11-26 18:37:17.745431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.559 [2024-11-26 18:37:17.745443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:24.559 [2024-11-26 18:37:17.745452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:24.559 [2024-11-26 18:37:17.745460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.559 [2024-11-26 18:37:17.745491] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:24.559 [2024-11-26 18:37:17.750681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.559 [2024-11-26 18:37:17.750715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:24.559 [2024-11-26 18:37:17.750728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.209 ms 00:33:24.559 [2024-11-26 18:37:17.750736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.559 [2024-11-26 18:37:17.750764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.559 [2024-11-26 18:37:17.750773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:24.559 [2024-11-26 18:37:17.750780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:24.559 [2024-11-26 18:37:17.750788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.559 [2024-11-26 18:37:17.750834] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:24.559 [2024-11-26 18:37:17.750865] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:24.559 [2024-11-26 18:37:17.750901] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:24.559 [2024-11-26 18:37:17.750920] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:24.559 [2024-11-26 18:37:17.751044] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:24.559 [2024-11-26 18:37:17.751062] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:24.559 [2024-11-26 18:37:17.751074] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:24.559 [2024-11-26 18:37:17.751085] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:24.559 [2024-11-26 18:37:17.751095] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:24.559 [2024-11-26 18:37:17.751103] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:24.559 [2024-11-26 18:37:17.751112] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:24.559 [2024-11-26 18:37:17.751124] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:24.559 [2024-11-26 18:37:17.751132] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:24.559 [2024-11-26 18:37:17.751140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.559 [2024-11-26 18:37:17.751149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:24.559 [2024-11-26 18:37:17.751157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:33:24.559 [2024-11-26 18:37:17.751165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.559 [2024-11-26 18:37:17.751237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.559 [2024-11-26 18:37:17.751246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:24.559 [2024-11-26 18:37:17.751254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:33:24.559 [2024-11-26 18:37:17.751261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.559 [2024-11-26 18:37:17.751381] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:24.559 [2024-11-26 18:37:17.751406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:24.559 [2024-11-26 18:37:17.751416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:24.559 [2024-11-26 18:37:17.751426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.559 [2024-11-26 18:37:17.751436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:24.559 [2024-11-26 18:37:17.751444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:24.559 [2024-11-26 18:37:17.751452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:24.559 [2024-11-26 18:37:17.751461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:24.559 [2024-11-26 18:37:17.751470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:24.559 [2024-11-26 18:37:17.751478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:24.559 [2024-11-26 18:37:17.751486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:24.559 [2024-11-26 18:37:17.751493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:24.559 [2024-11-26 18:37:17.751501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:24.559 [2024-11-26 18:37:17.751521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:24.559 [2024-11-26 18:37:17.751529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:24.559 [2024-11-26 18:37:17.751537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.559 [2024-11-26 18:37:17.751545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:24.559 [2024-11-26 18:37:17.751552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:24.559 [2024-11-26 18:37:17.751561] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.559 [2024-11-26 18:37:17.751568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:24.559 [2024-11-26 18:37:17.751576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:24.559 [2024-11-26 18:37:17.751583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:24.559 [2024-11-26 18:37:17.751591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:24.559 [2024-11-26 18:37:17.751598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:24.559 [2024-11-26 18:37:17.751605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:24.559 [2024-11-26 18:37:17.751613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:24.559 [2024-11-26 18:37:17.751633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:24.559 [2024-11-26 18:37:17.751641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:24.559 [2024-11-26 18:37:17.751648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:24.559 [2024-11-26 18:37:17.751656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:24.559 [2024-11-26 18:37:17.751664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:24.559 [2024-11-26 18:37:17.751673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:24.559 [2024-11-26 18:37:17.751680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:24.559 [2024-11-26 18:37:17.751688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:24.559 [2024-11-26 18:37:17.751696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:24.559 [2024-11-26 18:37:17.751703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:24.559 [2024-11-26 18:37:17.751710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:24.559 [2024-11-26 18:37:17.751717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:24.559 [2024-11-26 18:37:17.751724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:24.559 [2024-11-26 18:37:17.751732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.559 [2024-11-26 18:37:17.751740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:24.559 [2024-11-26 18:37:17.751747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:24.559 [2024-11-26 18:37:17.751758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.559 [2024-11-26 18:37:17.751765] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:24.559 [2024-11-26 18:37:17.751774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:24.559 [2024-11-26 18:37:17.751782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:24.559 [2024-11-26 18:37:17.751791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.559 [2024-11-26 18:37:17.751800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:24.559 [2024-11-26 18:37:17.751810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:24.559 [2024-11-26 18:37:17.751817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:24.559 
[2024-11-26 18:37:17.751825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:24.559 [2024-11-26 18:37:17.751832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:24.559 [2024-11-26 18:37:17.751841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:24.560 [2024-11-26 18:37:17.751850] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:24.560 [2024-11-26 18:37:17.751860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:24.560 [2024-11-26 18:37:17.751874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:24.560 [2024-11-26 18:37:17.751883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:24.560 [2024-11-26 18:37:17.751891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:24.560 [2024-11-26 18:37:17.751899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:24.560 [2024-11-26 18:37:17.751907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:24.560 [2024-11-26 18:37:17.751915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:24.560 [2024-11-26 18:37:17.751923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:24.560 [2024-11-26 18:37:17.751930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:24.560 [2024-11-26 18:37:17.751938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:24.560 [2024-11-26 18:37:17.751946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:24.560 [2024-11-26 18:37:17.751956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:24.560 [2024-11-26 18:37:17.751964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:24.560 [2024-11-26 18:37:17.751971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:24.560 [2024-11-26 18:37:17.751979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:24.560 [2024-11-26 18:37:17.751987] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:24.560 [2024-11-26 18:37:17.751995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:24.560 [2024-11-26 18:37:17.752004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:24.560 [2024-11-26 18:37:17.752012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:24.560 [2024-11-26 18:37:17.752020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:24.560 [2024-11-26 18:37:17.752032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:24.560 [2024-11-26 18:37:17.752040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.560 [2024-11-26 18:37:17.752050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:24.560 [2024-11-26 18:37:17.752058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:33:24.560 [2024-11-26 18:37:17.752067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.560 [2024-11-26 18:37:17.794262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.560 [2024-11-26 18:37:17.794339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:24.560 [2024-11-26 18:37:17.794353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.218 ms 00:33:24.560 [2024-11-26 18:37:17.794368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.560 [2024-11-26 18:37:17.794498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.560 [2024-11-26 18:37:17.794508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:24.560 [2024-11-26 18:37:17.794517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:33:24.560 [2024-11-26 18:37:17.794526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.560 [2024-11-26 18:37:17.865744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.560 [2024-11-26 18:37:17.865803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:24.560 [2024-11-26 18:37:17.865819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.276 ms 00:33:24.560 [2024-11-26 18:37:17.865828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.560 [2024-11-26 18:37:17.865892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.560 [2024-11-26 18:37:17.865902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:24.560 [2024-11-26 18:37:17.865917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:24.560 [2024-11-26 18:37:17.865926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.560 [2024-11-26 18:37:17.866448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.560 [2024-11-26 18:37:17.866471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:24.560 [2024-11-26 18:37:17.866481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:33:24.560 [2024-11-26 18:37:17.866489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.560 [2024-11-26 18:37:17.866609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.560 [2024-11-26 18:37:17.866642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:24.560 [2024-11-26 18:37:17.866657] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:33:24.560 [2024-11-26 18:37:17.866665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.560 [2024-11-26 18:37:17.888227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.560 [2024-11-26 18:37:17.888276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:24.560 [2024-11-26 18:37:17.888306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.580 ms 00:33:24.560 [2024-11-26 18:37:17.888315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.819 [2024-11-26 18:37:17.909447] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:24.819 [2024-11-26 18:37:17.909500] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:24.819 [2024-11-26 18:37:17.909515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.819 [2024-11-26 18:37:17.909525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:24.819 [2024-11-26 18:37:17.909536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.101 ms 00:33:24.819 [2024-11-26 18:37:17.909545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.819 [2024-11-26 18:37:17.943812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.819 [2024-11-26 18:37:17.943870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:24.819 [2024-11-26 18:37:17.943886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.272 ms 00:33:24.819 [2024-11-26 18:37:17.943895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.819 [2024-11-26 18:37:17.964560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.819 [2024-11-26 18:37:17.964609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:24.819 [2024-11-26 18:37:17.964627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.642 ms 00:33:24.819 [2024-11-26 18:37:17.964635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.819 [2024-11-26 18:37:17.984854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.819 [2024-11-26 18:37:17.984914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:24.819 [2024-11-26 18:37:17.984926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.215 ms 00:33:24.819 [2024-11-26 18:37:17.984935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.819 [2024-11-26 18:37:17.985813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.819 [2024-11-26 18:37:17.985845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:24.819 [2024-11-26 18:37:17.985860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:33:24.819 [2024-11-26 18:37:17.985868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.819 [2024-11-26 18:37:18.079862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.819 [2024-11-26 18:37:18.079929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:24.819 [2024-11-26 18:37:18.079954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.153 ms 00:33:24.819 [2024-11-26 18:37:18.079962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.819 [2024-11-26 18:37:18.092292] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:24.819 [2024-11-26 18:37:18.095533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.819 [2024-11-26 18:37:18.095568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:24.819 [2024-11-26 18:37:18.095582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.522 ms 00:33:24.819 [2024-11-26 18:37:18.095590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.819 [2024-11-26 18:37:18.095720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.819 [2024-11-26 18:37:18.095733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:24.819 [2024-11-26 18:37:18.095747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:24.819 [2024-11-26 18:37:18.095755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.819 [2024-11-26 18:37:18.097272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.819 [2024-11-26 18:37:18.097311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:24.819 [2024-11-26 18:37:18.097321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:33:24.819 [2024-11-26 18:37:18.097329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.819 [2024-11-26 18:37:18.097357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.819 [2024-11-26 18:37:18.097368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:24.819 [2024-11-26 18:37:18.097376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:24.819 [2024-11-26 18:37:18.097384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.819 [2024-11-26 18:37:18.097425] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:24.819 [2024-11-26 18:37:18.097435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.819 [2024-11-26 18:37:18.097443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:24.819 [2024-11-26 18:37:18.097451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:24.819 [2024-11-26 18:37:18.097459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.819 [2024-11-26 18:37:18.139568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.819 [2024-11-26 18:37:18.139655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:24.819 [2024-11-26 18:37:18.139681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.169 ms 00:33:24.819 [2024-11-26 18:37:18.139690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.819 [2024-11-26 18:37:18.139804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.819 [2024-11-26 18:37:18.139816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:24.819 [2024-11-26 18:37:18.139826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:33:24.819 [2024-11-26 18:37:18.139834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:33:24.819 [2024-11-26 18:37:18.141127] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 427.891 ms, result 0 00:33:26.211  [2024-11-26T18:37:20.482Z] Copying: 960/1048576 [kB] (960 kBps) [2024-11-26T18:37:21.419Z] Copying: 5468/1048576 [kB] (4508 kBps) [2024-11-26T18:37:22.355Z] Copying: 38/1024 [MB] (33 MBps) [2024-11-26T18:37:23.730Z] Copying: 75/1024 [MB] (36 MBps) [2024-11-26T18:37:24.666Z] Copying: 113/1024 [MB] (37 MBps) [2024-11-26T18:37:25.626Z] Copying: 151/1024 [MB] (38 MBps) [2024-11-26T18:37:26.563Z] Copying: 189/1024 [MB] (38 MBps) [2024-11-26T18:37:27.499Z] Copying: 226/1024 [MB] (36 MBps) [2024-11-26T18:37:28.436Z] Copying: 263/1024 [MB] (37 MBps) [2024-11-26T18:37:29.380Z] Copying: 300/1024 [MB] (37 MBps) [2024-11-26T18:37:30.318Z] Copying: 337/1024 [MB] (36 MBps) [2024-11-26T18:37:31.697Z] Copying: 373/1024 [MB] (36 MBps) [2024-11-26T18:37:32.668Z] Copying: 412/1024 [MB] (38 MBps) [2024-11-26T18:37:33.625Z] Copying: 447/1024 [MB] (35 MBps) [2024-11-26T18:37:34.563Z] Copying: 484/1024 [MB] (36 MBps) [2024-11-26T18:37:35.502Z] Copying: 521/1024 [MB] (37 MBps) [2024-11-26T18:37:36.441Z] Copying: 558/1024 [MB] (37 MBps) [2024-11-26T18:37:37.379Z] Copying: 597/1024 [MB] (38 MBps) [2024-11-26T18:37:38.320Z] Copying: 634/1024 [MB] (37 MBps) [2024-11-26T18:37:39.706Z] Copying: 672/1024 [MB] (37 MBps) [2024-11-26T18:37:40.285Z] Copying: 710/1024 [MB] (37 MBps) [2024-11-26T18:37:41.664Z] Copying: 747/1024 [MB] (37 MBps) [2024-11-26T18:37:42.604Z] Copying: 785/1024 [MB] (38 MBps) [2024-11-26T18:37:43.540Z] Copying: 823/1024 [MB] (38 MBps) [2024-11-26T18:37:44.478Z] Copying: 862/1024 [MB] (38 MBps) [2024-11-26T18:37:45.414Z] Copying: 900/1024 [MB] (37 MBps) [2024-11-26T18:37:46.348Z] Copying: 939/1024 [MB] (39 MBps) [2024-11-26T18:37:47.287Z] Copying: 977/1024 [MB] (37 MBps) [2024-11-26T18:37:47.551Z] Copying: 1015/1024 [MB] (38 MBps) [2024-11-26T18:37:48.118Z] Copying: 1024/1024 [MB] (average 35 MBps)[2024-11-26 18:37:47.931130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:54.783 [2024-11-26 18:37:47.931374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:54.783 [2024-11-26 18:37:47.931413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:54.783 [2024-11-26 18:37:47.931435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:54.783 [2024-11-26 18:37:47.931486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:54.783 [2024-11-26 18:37:47.940691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:54.783 [2024-11-26 18:37:47.940747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:54.783 [2024-11-26 18:37:47.940764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.185 ms 00:33:54.783 [2024-11-26 18:37:47.940775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:54.783 [2024-11-26 18:37:47.941117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:54.783 [2024-11-26 18:37:47.941147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:54.783 [2024-11-26 18:37:47.941160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:33:54.783 [2024-11-26 18:37:47.941171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:54.783 [2024-11-26 18:37:47.953824] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:54.783 [2024-11-26 18:37:47.953885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:54.783 [2024-11-26 18:37:47.953900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.653 ms 00:33:54.783 [2024-11-26 18:37:47.953909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:54.783 [2024-11-26 18:37:47.960949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:54.783 [2024-11-26 18:37:47.961033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:54.783 [2024-11-26 18:37:47.961063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.012 ms 00:33:54.783 [2024-11-26 18:37:47.961075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:54.783 [2024-11-26 18:37:48.006160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:54.783 [2024-11-26 18:37:48.006223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:54.783 [2024-11-26 18:37:48.006238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.095 ms 00:33:54.783 [2024-11-26 18:37:48.006246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:54.784 [2024-11-26 18:37:48.030033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:54.784 [2024-11-26 18:37:48.030097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:54.784 [2024-11-26 18:37:48.030112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.769 ms 00:33:54.784 [2024-11-26 18:37:48.030122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:54.784 [2024-11-26 18:37:48.031802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:54.784 [2024-11-26 18:37:48.031840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:54.784 [2024-11-26 18:37:48.031853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.646 ms 00:33:54.784 [2024-11-26 18:37:48.031870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:54.784 [2024-11-26 18:37:48.071438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:54.784 [2024-11-26 18:37:48.071495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:54.784 [2024-11-26 18:37:48.071509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.624 ms 00:33:54.784 [2024-11-26 18:37:48.071518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:54.784 [2024-11-26 18:37:48.113809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:54.784 [2024-11-26 18:37:48.113872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:54.784 [2024-11-26 18:37:48.113886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.319 ms 00:33:54.784 [2024-11-26 18:37:48.113895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.045 [2024-11-26 18:37:48.156488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:55.045 [2024-11-26 18:37:48.156557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:55.045 [2024-11-26 18:37:48.156572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.617 ms 00:33:55.045 [2024-11-26 18:37:48.156581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:33:55.045 [2024-11-26 18:37:48.199841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:55.045 [2024-11-26 18:37:48.199904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:55.045 [2024-11-26 18:37:48.199918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.211 ms 00:33:55.045 [2024-11-26 18:37:48.199927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.045 [2024-11-26 18:37:48.199983] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:55.045 [2024-11-26 18:37:48.199998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:55.045 [2024-11-26 18:37:48.200009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:33:55.045 [2024-11-26 18:37:48.200017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:55.045 [2024-11-26 18:37:48.200334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200384] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200599] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 
18:37:48.200824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:55.046 [2024-11-26 18:37:48.200890] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:55.046 [2024-11-26 18:37:48.200899] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b51e1e54-54e8-46dc-809d-0771feafeb8a 00:33:55.046 [2024-11-26 18:37:48.200908] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:33:55.046 [2024-11-26 18:37:48.200916] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 149184 00:33:55.046 [2024-11-26 18:37:48.200929] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 147200 00:33:55.046 [2024-11-26 18:37:48.200938] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0135 00:33:55.046 [2024-11-26 18:37:48.200946] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:55.046 [2024-11-26 18:37:48.200970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:55.047 [2024-11-26 18:37:48.200979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:55.047 [2024-11-26 18:37:48.201003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:55.047 [2024-11-26 18:37:48.201011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:55.047 [2024-11-26 18:37:48.201021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:55.047 [2024-11-26 18:37:48.201031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:55.047 [2024-11-26 18:37:48.201041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:33:55.047 [2024-11-26 18:37:48.201049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.047 [2024-11-26 18:37:48.225351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:55.047 [2024-11-26 18:37:48.225402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:55.047 [2024-11-26 18:37:48.225416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.303 ms 00:33:55.047 [2024-11-26 18:37:48.225426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.047 [2024-11-26 18:37:48.226102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:55.047 [2024-11-26 18:37:48.226120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:55.047 [2024-11-26 18:37:48.226131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:33:55.047 [2024-11-26 18:37:48.226139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.047 [2024-11-26 18:37:48.288499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.047 [2024-11-26 18:37:48.288557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 
00:33:55.047 [2024-11-26 18:37:48.288571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.047 [2024-11-26 18:37:48.288579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.047 [2024-11-26 18:37:48.288660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.047 [2024-11-26 18:37:48.288670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:55.047 [2024-11-26 18:37:48.288679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.047 [2024-11-26 18:37:48.288686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.047 [2024-11-26 18:37:48.288794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.047 [2024-11-26 18:37:48.288814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:55.047 [2024-11-26 18:37:48.288823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.047 [2024-11-26 18:37:48.288832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.047 [2024-11-26 18:37:48.288859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.047 [2024-11-26 18:37:48.288869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:55.047 [2024-11-26 18:37:48.288877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.047 [2024-11-26 18:37:48.288885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.308 [2024-11-26 18:37:48.428701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.308 [2024-11-26 18:37:48.428758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:55.308 [2024-11-26 18:37:48.428773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.308 [2024-11-26 18:37:48.428782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.308 [2024-11-26 18:37:48.542414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.308 [2024-11-26 18:37:48.542494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:55.308 [2024-11-26 18:37:48.542508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.308 [2024-11-26 18:37:48.542516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.308 [2024-11-26 18:37:48.542602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.308 [2024-11-26 18:37:48.542643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:55.308 [2024-11-26 18:37:48.542652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.308 [2024-11-26 18:37:48.542660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.308 [2024-11-26 18:37:48.542697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.308 [2024-11-26 18:37:48.542707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:55.308 [2024-11-26 18:37:48.542715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.308 [2024-11-26 18:37:48.542723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.308 [2024-11-26 18:37:48.542843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.308 [2024-11-26 18:37:48.542855] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:55.308 [2024-11-26 18:37:48.542867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.308 [2024-11-26 18:37:48.542876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.308 [2024-11-26 18:37:48.542910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.308 [2024-11-26 18:37:48.542924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:55.308 [2024-11-26 18:37:48.542932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.308 [2024-11-26 18:37:48.542939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.308 [2024-11-26 18:37:48.542976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.308 [2024-11-26 18:37:48.542988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:55.308 [2024-11-26 18:37:48.542999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.308 [2024-11-26 18:37:48.543006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.308 [2024-11-26 18:37:48.543067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:55.308 [2024-11-26 18:37:48.543083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:55.308 [2024-11-26 18:37:48.543092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:55.308 [2024-11-26 18:37:48.543101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:55.309 [2024-11-26 18:37:48.543224] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 613.290 ms, result 0 00:33:56.689 00:33:56.689 00:33:56.689 18:37:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:58.684 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:58.684 18:37:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:58.684 [2024-11-26 18:37:51.705001] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:33:58.684 [2024-11-26 18:37:51.705136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83506 ] 00:33:58.684 [2024-11-26 18:37:51.885066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.944 [2024-11-26 18:37:52.007739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.204 [2024-11-26 18:37:52.376960] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:59.204 [2024-11-26 18:37:52.377046] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:59.204 [2024-11-26 18:37:52.535263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.204 [2024-11-26 18:37:52.535331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:59.204 [2024-11-26 18:37:52.535346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:59.204 [2024-11-26 18:37:52.535354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.204 [2024-11-26 18:37:52.535405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.204 [2024-11-26 18:37:52.535417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:59.204 [2024-11-26 18:37:52.535426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:33:59.204 [2024-11-26 18:37:52.535433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.204 [2024-11-26 18:37:52.535452] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:59.204 [2024-11-26 18:37:52.536562] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:59.204 [2024-11-26 18:37:52.536589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.204 [2024-11-26 18:37:52.536598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:59.204 [2024-11-26 18:37:52.536608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:33:59.204 [2024-11-26 18:37:52.536625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.466 [2024-11-26 18:37:52.538115] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:59.466 [2024-11-26 18:37:52.558924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.466 [2024-11-26 18:37:52.558964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:59.466 [2024-11-26 18:37:52.558978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.850 ms 00:33:59.466 [2024-11-26 18:37:52.558987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.466 [2024-11-26 18:37:52.559060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.466 [2024-11-26 18:37:52.559071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:59.466 [2024-11-26 18:37:52.559079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:33:59.466 [2024-11-26 18:37:52.559087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.466 [2024-11-26 18:37:52.566339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:59.466 [2024-11-26 18:37:52.566424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:59.466 [2024-11-26 18:37:52.566457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.196 ms 00:33:59.466 [2024-11-26 18:37:52.566487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.466 [2024-11-26 18:37:52.566597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.466 [2024-11-26 18:37:52.566663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:59.466 [2024-11-26 18:37:52.566689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:33:59.466 [2024-11-26 18:37:52.566714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.466 [2024-11-26 18:37:52.566853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.466 [2024-11-26 18:37:52.566892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:59.466 [2024-11-26 18:37:52.566923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:59.466 [2024-11-26 18:37:52.566955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.466 [2024-11-26 18:37:52.567013] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:59.466 [2024-11-26 18:37:52.572306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.466 [2024-11-26 18:37:52.572383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:59.466 [2024-11-26 18:37:52.572425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.313 ms 00:33:59.466 [2024-11-26 18:37:52.572457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.466 [2024-11-26 18:37:52.572511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.466 [2024-11-26 18:37:52.572544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:59.466 [2024-11-26 18:37:52.572574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:59.466 [2024-11-26 18:37:52.572600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.466 [2024-11-26 18:37:52.572695] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:59.466 [2024-11-26 18:37:52.572750] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:59.466 [2024-11-26 18:37:52.572828] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:59.466 [2024-11-26 18:37:52.572885] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:59.466 [2024-11-26 18:37:52.572984] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:59.466 [2024-11-26 18:37:52.572996] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:59.466 [2024-11-26 18:37:52.573007] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:59.466 [2024-11-26 18:37:52.573019] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:59.466 [2024-11-26 18:37:52.573030] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:59.466 [2024-11-26 18:37:52.573053] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:59.466 [2024-11-26 18:37:52.573062] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:59.466 [2024-11-26 18:37:52.573074] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:59.466 [2024-11-26 18:37:52.573082] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:59.466 [2024-11-26 18:37:52.573092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.466 [2024-11-26 18:37:52.573101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:59.466 [2024-11-26 18:37:52.573111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:33:59.466 [2024-11-26 18:37:52.573120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.466 [2024-11-26 18:37:52.573211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.466 [2024-11-26 18:37:52.573220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:59.466 [2024-11-26 18:37:52.573229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:33:59.466 [2024-11-26 18:37:52.573237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.466 [2024-11-26 18:37:52.573343] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:59.466 [2024-11-26 18:37:52.573358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:59.466 [2024-11-26 18:37:52.573367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:59.466 [2024-11-26 18:37:52.573376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:59.466 [2024-11-26 18:37:52.573384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:59.466 [2024-11-26 18:37:52.573392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:59.466 [2024-11-26 18:37:52.573400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:59.466 [2024-11-26 18:37:52.573408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:59.467 [2024-11-26 18:37:52.573416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:59.467 [2024-11-26 18:37:52.573425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:59.467 [2024-11-26 18:37:52.573433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:59.467 [2024-11-26 18:37:52.573440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:59.467 [2024-11-26 18:37:52.573448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:59.467 [2024-11-26 18:37:52.573466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:59.467 [2024-11-26 18:37:52.573474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:59.467 [2024-11-26 18:37:52.573482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:59.467 [2024-11-26 18:37:52.573490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:59.467 [2024-11-26 18:37:52.573498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:59.467 [2024-11-26 18:37:52.573506] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:59.467 [2024-11-26 18:37:52.573513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:59.467 [2024-11-26 18:37:52.573521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:59.467 [2024-11-26 18:37:52.573528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:59.467 [2024-11-26 18:37:52.573536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:59.467 [2024-11-26 18:37:52.573543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:59.467 [2024-11-26 18:37:52.573550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:59.467 [2024-11-26 18:37:52.573558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:59.467 [2024-11-26 18:37:52.573565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:59.467 [2024-11-26 18:37:52.573572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:59.467 [2024-11-26 18:37:52.573579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:59.467 [2024-11-26 18:37:52.573587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:59.467 [2024-11-26 18:37:52.573595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:59.467 [2024-11-26 18:37:52.573601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:59.467 [2024-11-26 18:37:52.573608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:59.467 [2024-11-26 18:37:52.573631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:59.467 [2024-11-26 18:37:52.573640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:59.467 [2024-11-26 18:37:52.573647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:59.467 [2024-11-26 18:37:52.573654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:59.467 [2024-11-26 18:37:52.573662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:59.467 [2024-11-26 18:37:52.573669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:59.467 [2024-11-26 18:37:52.573676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:59.467 [2024-11-26 18:37:52.573684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:59.467 [2024-11-26 18:37:52.573691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:59.467 [2024-11-26 18:37:52.573698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:59.467 [2024-11-26 18:37:52.573706] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:59.467 [2024-11-26 18:37:52.573714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:59.467 [2024-11-26 18:37:52.573722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:59.467 [2024-11-26 18:37:52.573730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:59.467 [2024-11-26 18:37:52.573739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:59.467 [2024-11-26 18:37:52.573747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:59.467 [2024-11-26 18:37:52.573754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:59.467 
[2024-11-26 18:37:52.573762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:59.467 [2024-11-26 18:37:52.573769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:59.467 [2024-11-26 18:37:52.573776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:59.467 [2024-11-26 18:37:52.573786] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:59.467 [2024-11-26 18:37:52.573797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:59.467 [2024-11-26 18:37:52.573810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:59.467 [2024-11-26 18:37:52.573818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:59.467 [2024-11-26 18:37:52.573827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:59.467 [2024-11-26 18:37:52.573835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:59.467 [2024-11-26 18:37:52.573842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:59.467 [2024-11-26 18:37:52.573851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:59.467 [2024-11-26 18:37:52.573859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:59.467 [2024-11-26 18:37:52.573867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:59.467 [2024-11-26 18:37:52.573875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:59.467 [2024-11-26 18:37:52.573883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:59.467 [2024-11-26 18:37:52.573891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:59.467 [2024-11-26 18:37:52.573898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:59.467 [2024-11-26 18:37:52.573906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:59.467 [2024-11-26 18:37:52.573913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:59.467 [2024-11-26 18:37:52.573921] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:59.467 [2024-11-26 18:37:52.573930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:59.467 [2024-11-26 18:37:52.573939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:59.467 [2024-11-26 18:37:52.573947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:59.467 [2024-11-26 18:37:52.573955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:59.467 [2024-11-26 18:37:52.573962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:59.467 [2024-11-26 18:37:52.573971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.467 [2024-11-26 18:37:52.573981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:59.467 [2024-11-26 18:37:52.573989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:33:59.467 [2024-11-26 18:37:52.573998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.467 [2024-11-26 18:37:52.613588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.468 [2024-11-26 18:37:52.613669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:59.468 [2024-11-26 18:37:52.613683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.612 ms 00:33:59.468 [2024-11-26 18:37:52.613696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.468 [2024-11-26 18:37:52.613794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.468 [2024-11-26 18:37:52.613804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:59.468 [2024-11-26 18:37:52.613813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:33:59.468 [2024-11-26 18:37:52.613821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.468 [2024-11-26 18:37:52.673425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.468 [2024-11-26 18:37:52.673499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:59.468 [2024-11-26 18:37:52.673514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.639 ms 00:33:59.468 [2024-11-26 18:37:52.673540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.468 [2024-11-26 18:37:52.673606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.468 [2024-11-26 18:37:52.673617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:59.468 [2024-11-26 18:37:52.673652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:59.468 [2024-11-26 18:37:52.673663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.468 [2024-11-26 18:37:52.674178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.468 [2024-11-26 18:37:52.674198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:59.468 [2024-11-26 18:37:52.674209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:33:59.468 [2024-11-26 18:37:52.674218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.468 [2024-11-26 18:37:52.674351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.468 [2024-11-26 18:37:52.674366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:59.468 [2024-11-26 18:37:52.674384] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:33:59.468 [2024-11-26 18:37:52.674393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.468 [2024-11-26 18:37:52.695570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.468 [2024-11-26 18:37:52.695638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:59.468 [2024-11-26 18:37:52.695669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.191 ms 00:33:59.468 [2024-11-26 18:37:52.695679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.468 [2024-11-26 18:37:52.715564] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:59.468 [2024-11-26 18:37:52.715611] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:59.468 [2024-11-26 18:37:52.715636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.468 [2024-11-26 18:37:52.715645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:59.468 [2024-11-26 18:37:52.715672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.840 ms 00:33:59.468 [2024-11-26 18:37:52.715680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.468 [2024-11-26 18:37:52.746427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.468 [2024-11-26 18:37:52.746504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:59.468 [2024-11-26 18:37:52.746521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.752 ms 00:33:59.468 [2024-11-26 18:37:52.746531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.468 [2024-11-26 18:37:52.766503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.468 [2024-11-26 18:37:52.766550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:59.468 [2024-11-26 18:37:52.766563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.925 ms 00:33:59.468 [2024-11-26 18:37:52.766571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.468 [2024-11-26 18:37:52.785638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.468 [2024-11-26 18:37:52.785680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:59.468 [2024-11-26 18:37:52.785691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.058 ms 00:33:59.468 [2024-11-26 18:37:52.785698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.468 [2024-11-26 18:37:52.786444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.468 [2024-11-26 18:37:52.786466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:59.468 [2024-11-26 18:37:52.786479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:33:59.468 [2024-11-26 18:37:52.786486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.727 [2024-11-26 18:37:52.875242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.727 [2024-11-26 18:37:52.875312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:59.727 [2024-11-26 18:37:52.875335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.908 ms 00:33:59.727 [2024-11-26 18:37:52.875342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.727 [2024-11-26 18:37:52.886689] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:59.727 [2024-11-26 18:37:52.889966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.727 [2024-11-26 18:37:52.890063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:59.727 [2024-11-26 18:37:52.890092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.580 ms 00:33:59.727 [2024-11-26 18:37:52.890115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.727 [2024-11-26 18:37:52.890218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.727 [2024-11-26 18:37:52.890230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:59.727 [2024-11-26 18:37:52.890243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:59.727 [2024-11-26 18:37:52.890251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.727 [2024-11-26 18:37:52.891080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.727 [2024-11-26 18:37:52.891096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:59.727 [2024-11-26 18:37:52.891106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:33:59.727 [2024-11-26 18:37:52.891114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.727 [2024-11-26 18:37:52.891142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.727 [2024-11-26 18:37:52.891152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:59.727 [2024-11-26 18:37:52.891161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:59.727 [2024-11-26 18:37:52.891170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.727 [2024-11-26 18:37:52.891208] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:59.727 [2024-11-26 18:37:52.891219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.727 [2024-11-26 18:37:52.891228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:59.727 [2024-11-26 18:37:52.891236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:59.727 [2024-11-26 18:37:52.891245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.727 [2024-11-26 18:37:52.928412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.727 [2024-11-26 18:37:52.928519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:59.727 [2024-11-26 18:37:52.928560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.219 ms 00:33:59.727 [2024-11-26 18:37:52.928568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.727 [2024-11-26 18:37:52.928664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.727 [2024-11-26 18:37:52.928675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:59.727 [2024-11-26 18:37:52.928684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:33:59.727 [2024-11-26 18:37:52.928692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:33:59.727 [2024-11-26 18:37:52.929920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 394.846 ms, result 0 00:34:01.133  [2024-11-26T18:37:55.406Z] Copying: 34/1024 [MB] (34 MBps) [2024-11-26T18:37:56.343Z] Copying: 66/1024 [MB] (32 MBps) [2024-11-26T18:37:57.355Z] Copying: 100/1024 [MB] (33 MBps) [2024-11-26T18:37:58.293Z] Copying: 134/1024 [MB] (33 MBps) [2024-11-26T18:37:59.230Z] Copying: 168/1024 [MB] (34 MBps) [2024-11-26T18:38:00.167Z] Copying: 203/1024 [MB] (35 MBps) [2024-11-26T18:38:01.104Z] Copying: 238/1024 [MB] (34 MBps) [2024-11-26T18:38:02.490Z] Copying: 269/1024 [MB] (30 MBps) [2024-11-26T18:38:03.429Z] Copying: 299/1024 [MB] (29 MBps) [2024-11-26T18:38:04.364Z] Copying: 331/1024 [MB] (32 MBps) [2024-11-26T18:38:05.322Z] Copying: 362/1024 [MB] (31 MBps) [2024-11-26T18:38:06.257Z] Copying: 396/1024 [MB] (34 MBps) [2024-11-26T18:38:07.192Z] Copying: 429/1024 [MB] (32 MBps) [2024-11-26T18:38:08.127Z] Copying: 462/1024 [MB] (32 MBps) [2024-11-26T18:38:09.502Z] Copying: 494/1024 [MB] (32 MBps) [2024-11-26T18:38:10.437Z] Copying: 528/1024 [MB] (34 MBps) [2024-11-26T18:38:11.372Z] Copying: 560/1024 [MB] (31 MBps) [2024-11-26T18:38:12.307Z] Copying: 592/1024 [MB] (32 MBps) [2024-11-26T18:38:13.244Z] Copying: 623/1024 [MB] (31 MBps) [2024-11-26T18:38:14.183Z] Copying: 655/1024 [MB] (32 MBps) [2024-11-26T18:38:15.122Z] Copying: 687/1024 [MB] (32 MBps) [2024-11-26T18:38:16.499Z] Copying: 721/1024 [MB] (33 MBps) [2024-11-26T18:38:17.067Z] Copying: 755/1024 [MB] (34 MBps) [2024-11-26T18:38:18.450Z] Copying: 791/1024 [MB] (35 MBps) [2024-11-26T18:38:19.387Z] Copying: 825/1024 [MB] (34 MBps) [2024-11-26T18:38:20.322Z] Copying: 861/1024 [MB] (35 MBps) [2024-11-26T18:38:21.255Z] Copying: 895/1024 [MB] (34 MBps) [2024-11-26T18:38:22.245Z] Copying: 929/1024 [MB] (34 MBps) [2024-11-26T18:38:23.179Z] Copying: 965/1024 [MB] (36 MBps) [2024-11-26T18:38:23.745Z] Copying: 1002/1024 [MB] (36 MBps) [2024-11-26T18:38:24.003Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-11-26 18:38:23.810205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.668 [2024-11-26 18:38:23.810303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:30.668 [2024-11-26 18:38:23.810329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:30.668 [2024-11-26 18:38:23.810345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.668 [2024-11-26 18:38:23.810384] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:30.668 [2024-11-26 18:38:23.818233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.668 [2024-11-26 18:38:23.818313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:30.668 [2024-11-26 18:38:23.818329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.834 ms 00:34:30.668 [2024-11-26 18:38:23.818339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.668 [2024-11-26 18:38:23.818637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.668 [2024-11-26 18:38:23.818652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:30.668 [2024-11-26 18:38:23.818663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:34:30.668 [2024-11-26 18:38:23.818673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:34:30.668 [2024-11-26 18:38:23.822269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.668 [2024-11-26 18:38:23.822316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:30.668 [2024-11-26 18:38:23.822329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.586 ms 00:34:30.668 [2024-11-26 18:38:23.822348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.668 [2024-11-26 18:38:23.829008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.668 [2024-11-26 18:38:23.829070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:30.668 [2024-11-26 18:38:23.829084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.644 ms 00:34:30.668 [2024-11-26 18:38:23.829093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.668 [2024-11-26 18:38:23.877314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.668 [2024-11-26 18:38:23.877399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:30.668 [2024-11-26 18:38:23.877416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.197 ms 00:34:30.668 [2024-11-26 18:38:23.877426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.668 [2024-11-26 18:38:23.904183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.668 [2024-11-26 18:38:23.904360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:30.668 [2024-11-26 18:38:23.904381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.708 ms 00:34:30.668 [2024-11-26 18:38:23.904391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.668 [2024-11-26 18:38:23.906088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.668 [2024-11-26 18:38:23.906137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:30.668 [2024-11-26 18:38:23.906150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:34:30.668 [2024-11-26 18:38:23.906161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.668 [2024-11-26 18:38:23.955293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.668 [2024-11-26 18:38:23.955509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:30.668 [2024-11-26 18:38:23.955534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.199 ms 00:34:30.668 [2024-11-26 18:38:23.955544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.928 [2024-11-26 18:38:24.003339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.928 [2024-11-26 18:38:24.003428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:30.928 [2024-11-26 18:38:24.003445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.771 ms 00:34:30.928 [2024-11-26 18:38:24.003455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.928 [2024-11-26 18:38:24.049288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.928 [2024-11-26 18:38:24.049385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:30.928 [2024-11-26 18:38:24.049403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.817 ms 00:34:30.928 [2024-11-26 18:38:24.049412] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.928 [2024-11-26 18:38:24.096115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.928 [2024-11-26 18:38:24.096201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:30.928 [2024-11-26 18:38:24.096218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.612 ms 00:34:30.928 [2024-11-26 18:38:24.096227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.928 [2024-11-26 18:38:24.096321] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:30.928 [2024-11-26 18:38:24.096355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:30.928 [2024-11-26 18:38:24.096371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:34:30.928 [2024-11-26 18:38:24.096381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096545] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 
18:38:24.096812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.096995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.097004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.097013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.097022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.097030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:30.928 [2024-11-26 18:38:24.097038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:34:30.929 [2024-11-26 18:38:24.097063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:30.929 [2024-11-26 18:38:24.097384] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:30.929 [2024-11-26 18:38:24.097397] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b51e1e54-54e8-46dc-809d-0771feafeb8a 00:34:30.929 [2024-11-26 18:38:24.097413] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:34:30.929 [2024-11-26 18:38:24.097427] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:30.929 [2024-11-26 18:38:24.097437] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:30.929 [2024-11-26 18:38:24.097450] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:30.929 [2024-11-26 18:38:24.097489] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:30.929 [2024-11-26 18:38:24.097505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:30.929 [2024-11-26 18:38:24.097518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:30.929 [2024-11-26 18:38:24.097531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:30.929 [2024-11-26 18:38:24.097544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:30.929 [2024-11-26 18:38:24.097558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.929 [2024-11-26 18:38:24.097569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:30.929 [2024-11-26 18:38:24.097585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:34:30.929 [2024-11-26 18:38:24.097606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.929 [2024-11-26 18:38:24.121908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.929 [2024-11-26 18:38:24.121985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:30.929 [2024-11-26 18:38:24.122002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.246 ms 00:34:30.929 [2024-11-26 18:38:24.122011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.929 [2024-11-26 18:38:24.122726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:30.929 [2024-11-26 18:38:24.122764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:30.929 [2024-11-26 18:38:24.122775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:34:30.929 [2024-11-26 18:38:24.122784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.929 [2024-11-26 18:38:24.183066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.929 [2024-11-26 18:38:24.183149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:34:30.929 [2024-11-26 18:38:24.183167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.929 [2024-11-26 18:38:24.183177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.929 [2024-11-26 18:38:24.183263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.929 [2024-11-26 18:38:24.183284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:30.929 [2024-11-26 18:38:24.183295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.929 [2024-11-26 18:38:24.183304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.929 [2024-11-26 18:38:24.183434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.929 [2024-11-26 18:38:24.183449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:30.929 [2024-11-26 18:38:24.183459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.929 [2024-11-26 18:38:24.183468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:30.929 [2024-11-26 18:38:24.183488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:30.929 [2024-11-26 18:38:24.183497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:30.929 [2024-11-26 18:38:24.183511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:30.929 [2024-11-26 18:38:24.183520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.188 [2024-11-26 18:38:24.335092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.188 [2024-11-26 18:38:24.335181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:31.188 [2024-11-26 18:38:24.335198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.188 [2024-11-26 18:38:24.335208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.188 [2024-11-26 18:38:24.460746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.188 [2024-11-26 18:38:24.460835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:31.188 [2024-11-26 18:38:24.460861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.188 [2024-11-26 18:38:24.460871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.188 [2024-11-26 18:38:24.460979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.188 [2024-11-26 18:38:24.460992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:31.188 [2024-11-26 18:38:24.461002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.188 [2024-11-26 18:38:24.461012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.188 [2024-11-26 18:38:24.461056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.188 [2024-11-26 18:38:24.461067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:31.188 [2024-11-26 18:38:24.461076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.188 [2024-11-26 18:38:24.461089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.188 [2024-11-26 18:38:24.461210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.188 
[2024-11-26 18:38:24.461224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:31.188 [2024-11-26 18:38:24.461234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.188 [2024-11-26 18:38:24.461243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.188 [2024-11-26 18:38:24.461292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.188 [2024-11-26 18:38:24.461306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:31.188 [2024-11-26 18:38:24.461316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.188 [2024-11-26 18:38:24.461324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.188 [2024-11-26 18:38:24.461372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.188 [2024-11-26 18:38:24.461383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:31.188 [2024-11-26 18:38:24.461392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.188 [2024-11-26 18:38:24.461402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.188 [2024-11-26 18:38:24.461449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:31.188 [2024-11-26 18:38:24.461460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:31.188 [2024-11-26 18:38:24.461469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:31.188 [2024-11-26 18:38:24.461483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.188 [2024-11-26 18:38:24.461614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 652.644 ms, result 0 00:34:32.592 00:34:32.592 00:34:32.592 18:38:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:34:35.122 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:34:35.123 18:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:34:35.123 18:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:34:35.123 18:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:35.123 18:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:34:35.123 Process with pid 81899 is not found 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81899 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81899 ']' 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81899 00:34:35.123 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81899) - No such process 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81899 is not found' 
00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:34:35.123 Remove shared memory files 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:34:35.123 ************************************ 00:34:35.123 END TEST ftl_dirty_shutdown 00:34:35.123 ************************************ 00:34:35.123 00:34:35.123 real 3m12.000s 00:34:35.123 user 3m40.289s 00:34:35.123 sys 0m30.534s 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:35.123 18:38:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:35.123 18:38:28 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:34:35.123 18:38:28 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:35.123 18:38:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.123 18:38:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:35.123 ************************************ 00:34:35.123 START TEST ftl_upgrade_shutdown 00:34:35.123 ************************************ 00:34:35.123 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:34:35.383 * Looking for test storage... 
00:34:35.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:35.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.383 --rc genhtml_branch_coverage=1 00:34:35.383 --rc genhtml_function_coverage=1 00:34:35.383 --rc genhtml_legend=1 00:34:35.383 --rc geninfo_all_blocks=1 00:34:35.383 --rc geninfo_unexecuted_blocks=1 00:34:35.383 00:34:35.383 ' 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:35.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.383 --rc genhtml_branch_coverage=1 00:34:35.383 --rc genhtml_function_coverage=1 00:34:35.383 --rc genhtml_legend=1 00:34:35.383 --rc geninfo_all_blocks=1 00:34:35.383 --rc geninfo_unexecuted_blocks=1 00:34:35.383 00:34:35.383 ' 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:35.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.383 --rc genhtml_branch_coverage=1 00:34:35.383 --rc genhtml_function_coverage=1 00:34:35.383 --rc genhtml_legend=1 00:34:35.383 --rc geninfo_all_blocks=1 00:34:35.383 --rc geninfo_unexecuted_blocks=1 00:34:35.383 00:34:35.383 ' 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:35.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.383 --rc genhtml_branch_coverage=1 00:34:35.383 --rc genhtml_function_coverage=1 00:34:35.383 --rc genhtml_legend=1 00:34:35.383 --rc geninfo_all_blocks=1 00:34:35.383 --rc geninfo_unexecuted_blocks=1 00:34:35.383 00:34:35.383 ' 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:35.383 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:34:35.384 18:38:28 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83943 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83943 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83943 ']' 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:35.384 18:38:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:35.642 [2024-11-26 18:38:28.763251] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:34:35.642 [2024-11-26 18:38:28.764203] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83943 ] 00:34:35.642 [2024-11-26 18:38:28.942112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.899 [2024-11-26 18:38:29.103967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:34:36.834 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:34:37.403 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:34:37.403 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:34:37.403 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:34:37.403 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:34:37.403 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:34:37.403 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:34:37.403 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:34:37.403 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:34:37.661 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:34:37.661 { 00:34:37.661 "name": "basen1", 00:34:37.661 "aliases": [ 00:34:37.661 "313a88c0-5a93-4cab-8594-70793f79b60a" 00:34:37.661 ], 00:34:37.661 "product_name": "NVMe disk", 00:34:37.661 "block_size": 4096, 00:34:37.661 "num_blocks": 1310720, 00:34:37.661 "uuid": "313a88c0-5a93-4cab-8594-70793f79b60a", 00:34:37.661 "numa_id": -1, 00:34:37.661 "assigned_rate_limits": { 00:34:37.661 "rw_ios_per_sec": 0, 00:34:37.661 "rw_mbytes_per_sec": 0, 00:34:37.661 "r_mbytes_per_sec": 0, 00:34:37.661 "w_mbytes_per_sec": 0 00:34:37.661 }, 00:34:37.661 "claimed": true, 00:34:37.661 "claim_type": "read_many_write_one", 00:34:37.661 "zoned": false, 00:34:37.661 "supported_io_types": { 00:34:37.661 "read": true, 00:34:37.661 "write": true, 00:34:37.661 "unmap": true, 00:34:37.661 "flush": true, 00:34:37.661 "reset": true, 00:34:37.661 "nvme_admin": true, 00:34:37.661 "nvme_io": true, 00:34:37.661 "nvme_io_md": false, 00:34:37.661 "write_zeroes": true, 00:34:37.661 "zcopy": false, 00:34:37.661 "get_zone_info": false, 00:34:37.661 "zone_management": false, 00:34:37.661 "zone_append": false, 00:34:37.661 "compare": true, 00:34:37.661 "compare_and_write": false, 00:34:37.661 "abort": true, 00:34:37.661 "seek_hole": false, 00:34:37.661 "seek_data": false, 00:34:37.661 "copy": true, 00:34:37.661 "nvme_iov_md": false 00:34:37.661 }, 00:34:37.661 "driver_specific": { 00:34:37.661 "nvme": [ 00:34:37.661 { 00:34:37.661 "pci_address": "0000:00:11.0", 00:34:37.661 "trid": { 00:34:37.661 "trtype": "PCIe", 00:34:37.661 "traddr": "0000:00:11.0" 00:34:37.661 }, 00:34:37.661 "ctrlr_data": { 00:34:37.661 "cntlid": 0, 00:34:37.661 "vendor_id": "0x1b36", 00:34:37.661 "model_number": "QEMU NVMe Ctrl", 00:34:37.661 "serial_number": "12341", 00:34:37.661 "firmware_revision": "8.0.0", 00:34:37.661 "subnqn": "nqn.2019-08.org.qemu:12341", 00:34:37.661 "oacs": { 00:34:37.661 "security": 0, 00:34:37.661 "format": 1, 00:34:37.661 "firmware": 0, 00:34:37.661 "ns_manage": 1 00:34:37.661 }, 00:34:37.661 "multi_ctrlr": false, 00:34:37.661 "ana_reporting": false 00:34:37.661 }, 00:34:37.661 "vs": { 00:34:37.661 "nvme_version": "1.4" 00:34:37.661 }, 00:34:37.661 "ns_data": { 00:34:37.661 "id": 1, 00:34:37.661 "can_share": false 00:34:37.661 } 00:34:37.661 } 00:34:37.661 ], 00:34:37.661 "mp_policy": "active_passive" 00:34:37.661 } 00:34:37.661 } 00:34:37.661 ]' 00:34:37.661 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:34:37.661 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:34:37.661 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:34:37.661 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:34:37.661 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:34:37.661 18:38:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:34:37.661 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:34:37.661 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:34:37.661 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:34:37.661 18:38:30 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:37.661 18:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:37.921 18:38:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a388e963-ea62-4015-81d6-c88329105289 00:34:37.921 18:38:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:34:37.921 18:38:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a388e963-ea62-4015-81d6-c88329105289 00:34:38.180 18:38:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:34:38.438 18:38:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=eb2041e0-1743-4172-9374-29f1d1b3676e 00:34:38.438 18:38:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u eb2041e0-1743-4172-9374-29f1d1b3676e 00:34:38.696 18:38:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=686579b5-6886-4376-89d7-01279a7170c5 00:34:38.696 18:38:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 686579b5-6886-4376-89d7-01279a7170c5 ]] 00:34:38.696 18:38:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 686579b5-6886-4376-89d7-01279a7170c5 5120 00:34:38.696 18:38:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:34:38.696 18:38:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:34:38.696 18:38:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=686579b5-6886-4376-89d7-01279a7170c5 00:34:38.696 18:38:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:34:38.696 18:38:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 686579b5-6886-4376-89d7-01279a7170c5 00:34:38.696 18:38:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=686579b5-6886-4376-89d7-01279a7170c5 00:34:38.696 18:38:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:34:38.696 18:38:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:34:38.696 18:38:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:34:38.696 18:38:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 686579b5-6886-4376-89d7-01279a7170c5 00:34:38.954 18:38:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:34:38.954 { 00:34:38.954 "name": "686579b5-6886-4376-89d7-01279a7170c5", 00:34:38.954 "aliases": [ 00:34:38.954 "lvs/basen1p0" 00:34:38.954 ], 00:34:38.954 "product_name": "Logical Volume", 00:34:38.954 "block_size": 4096, 00:34:38.954 "num_blocks": 5242880, 00:34:38.954 "uuid": "686579b5-6886-4376-89d7-01279a7170c5", 00:34:38.954 "assigned_rate_limits": { 00:34:38.954 "rw_ios_per_sec": 0, 00:34:38.954 "rw_mbytes_per_sec": 0, 00:34:38.954 "r_mbytes_per_sec": 0, 00:34:38.954 "w_mbytes_per_sec": 0 00:34:38.954 }, 00:34:38.954 "claimed": false, 00:34:38.954 "zoned": false, 00:34:38.954 "supported_io_types": { 00:34:38.954 "read": true, 00:34:38.954 "write": true, 00:34:38.954 "unmap": true, 00:34:38.954 "flush": false, 00:34:38.954 "reset": true, 00:34:38.954 "nvme_admin": false, 00:34:38.954 "nvme_io": false, 00:34:38.954 "nvme_io_md": false, 00:34:38.954 "write_zeroes": 
true, 00:34:38.954 "zcopy": false, 00:34:38.954 "get_zone_info": false, 00:34:38.954 "zone_management": false, 00:34:38.954 "zone_append": false, 00:34:38.954 "compare": false, 00:34:38.954 "compare_and_write": false, 00:34:38.954 "abort": false, 00:34:38.954 "seek_hole": true, 00:34:38.954 "seek_data": true, 00:34:38.954 "copy": false, 00:34:38.954 "nvme_iov_md": false 00:34:38.954 }, 00:34:38.954 "driver_specific": { 00:34:38.954 "lvol": { 00:34:38.954 "lvol_store_uuid": "eb2041e0-1743-4172-9374-29f1d1b3676e", 00:34:38.954 "base_bdev": "basen1", 00:34:38.954 "thin_provision": true, 00:34:38.954 "num_allocated_clusters": 0, 00:34:38.954 "snapshot": false, 00:34:38.954 "clone": false, 00:34:38.954 "esnap_clone": false 00:34:38.954 } 00:34:38.954 } 00:34:38.954 } 00:34:38.954 ]' 00:34:38.954 18:38:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:34:38.954 18:38:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:34:38.954 18:38:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:34:39.212 18:38:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:34:39.212 18:38:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:34:39.212 18:38:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:34:39.212 18:38:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:34:39.212 18:38:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:34:39.212 18:38:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:34:39.470 18:38:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:34:39.470 18:38:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:34:39.470 18:38:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:34:39.729 18:38:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:34:39.729 18:38:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:34:39.729 18:38:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 686579b5-6886-4376-89d7-01279a7170c5 -c cachen1p0 --l2p_dram_limit 2 00:34:39.989 [2024-11-26 18:38:33.184074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.989 [2024-11-26 18:38:33.184149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:34:39.989 [2024-11-26 18:38:33.184169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:39.989 [2024-11-26 18:38:33.184179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.989 [2024-11-26 18:38:33.184260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.989 [2024-11-26 18:38:33.184272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:39.989 [2024-11-26 18:38:33.184284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:34:39.989 [2024-11-26 18:38:33.184293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.989 [2024-11-26 18:38:33.184319] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:34:39.989 [2024-11-26 
18:38:33.185594] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:34:39.989 [2024-11-26 18:38:33.185652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.989 [2024-11-26 18:38:33.185664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:39.989 [2024-11-26 18:38:33.185676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.337 ms 00:34:39.989 [2024-11-26 18:38:33.185686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.989 [2024-11-26 18:38:33.185806] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 3134470a-7984-414f-a04d-a7ad07872006 00:34:39.989 [2024-11-26 18:38:33.187395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.989 [2024-11-26 18:38:33.187507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:34:39.989 [2024-11-26 18:38:33.187525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:34:39.989 [2024-11-26 18:38:33.187537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.989 [2024-11-26 18:38:33.195592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.989 [2024-11-26 18:38:33.195728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:39.989 [2024-11-26 18:38:33.195769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.002 ms 00:34:39.989 [2024-11-26 18:38:33.195797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.989 [2024-11-26 18:38:33.195890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.989 [2024-11-26 18:38:33.195941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:39.989 [2024-11-26 18:38:33.195975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:34:39.989 [2024-11-26 18:38:33.196011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.989 [2024-11-26 18:38:33.196117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.989 [2024-11-26 18:38:33.196159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:34:39.989 [2024-11-26 18:38:33.196195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:34:39.989 [2024-11-26 18:38:33.196232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.989 [2024-11-26 18:38:33.196286] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:34:39.989 [2024-11-26 18:38:33.202635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.989 [2024-11-26 18:38:33.202776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:39.989 [2024-11-26 18:38:33.202824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.365 ms 00:34:39.989 [2024-11-26 18:38:33.202859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.989 [2024-11-26 18:38:33.202931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.989 [2024-11-26 18:38:33.202967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:34:39.989 [2024-11-26 18:38:33.203001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:34:39.989 [2024-11-26 18:38:33.203035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:34:39.989 [2024-11-26 18:38:33.203129] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:34:39.989 [2024-11-26 18:38:33.203316] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:34:39.989 [2024-11-26 18:38:33.203375] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:34:39.989 [2024-11-26 18:38:33.203423] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:34:39.989 [2024-11-26 18:38:33.203475] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:34:39.989 [2024-11-26 18:38:33.203526] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:34:39.989 [2024-11-26 18:38:33.203579] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:34:39.989 [2024-11-26 18:38:33.203613] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:34:39.989 [2024-11-26 18:38:33.203659] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:34:39.989 [2024-11-26 18:38:33.203692] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:34:39.989 [2024-11-26 18:38:33.203732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.989 [2024-11-26 18:38:33.203765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:34:39.989 [2024-11-26 18:38:33.203802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.607 ms 00:34:39.989 [2024-11-26 18:38:33.203834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.989 [2024-11-26 18:38:33.203957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.989 [2024-11-26 18:38:33.204018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:34:39.989 [2024-11-26 18:38:33.204057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:34:39.989 [2024-11-26 18:38:33.204089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.989 [2024-11-26 18:38:33.204240] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:34:39.989 [2024-11-26 18:38:33.204290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:34:39.989 [2024-11-26 18:38:33.204328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:39.989 [2024-11-26 18:38:33.204361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:39.989 [2024-11-26 18:38:33.204397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:34:39.989 [2024-11-26 18:38:33.204429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:34:39.989 [2024-11-26 18:38:33.204467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:34:39.989 [2024-11-26 18:38:33.204499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:34:39.989 [2024-11-26 18:38:33.204535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:34:39.989 [2024-11-26 18:38:33.204566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:39.989 [2024-11-26 18:38:33.204601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:34:39.989 [2024-11-26 18:38:33.204654] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:34:39.989 [2024-11-26 18:38:33.204689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:39.989 [2024-11-26 18:38:33.204723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:34:39.989 [2024-11-26 18:38:33.204759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:34:39.989 [2024-11-26 18:38:33.204790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:39.989 [2024-11-26 18:38:33.204830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:34:39.989 [2024-11-26 18:38:33.204874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:34:39.989 [2024-11-26 18:38:33.204911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:39.989 [2024-11-26 18:38:33.204944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:34:39.990 [2024-11-26 18:38:33.204980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:34:39.990 [2024-11-26 18:38:33.205012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:39.990 [2024-11-26 18:38:33.205046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:34:39.990 [2024-11-26 18:38:33.205078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:34:39.990 [2024-11-26 18:38:33.205113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:39.990 [2024-11-26 18:38:33.205146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:34:39.990 [2024-11-26 18:38:33.205180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:34:39.990 [2024-11-26 18:38:33.205212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:39.990 [2024-11-26 18:38:33.205247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:34:39.990 [2024-11-26 18:38:33.205280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:34:39.990 [2024-11-26 18:38:33.205314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:39.990 [2024-11-26 18:38:33.205345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:34:39.990 [2024-11-26 18:38:33.205383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:34:39.990 [2024-11-26 18:38:33.205415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:39.990 [2024-11-26 18:38:33.205450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:34:39.990 [2024-11-26 18:38:33.205482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:34:39.990 [2024-11-26 18:38:33.205516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:39.990 [2024-11-26 18:38:33.205548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:34:39.990 [2024-11-26 18:38:33.205582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:34:39.990 [2024-11-26 18:38:33.205629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:39.990 [2024-11-26 18:38:33.205667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:34:39.990 [2024-11-26 18:38:33.205699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:34:39.990 [2024-11-26 18:38:33.205733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:39.990 [2024-11-26 18:38:33.205764] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:34:39.990 [2024-11-26 18:38:33.205804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:34:39.990 [2024-11-26 18:38:33.205838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:39.990 [2024-11-26 18:38:33.205873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:39.990 [2024-11-26 18:38:33.205907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:34:39.990 [2024-11-26 18:38:33.205945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:34:39.990 [2024-11-26 18:38:33.205977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:34:39.990 [2024-11-26 18:38:33.206013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:34:39.990 [2024-11-26 18:38:33.206045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:34:39.990 [2024-11-26 18:38:33.206080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:34:39.990 [2024-11-26 18:38:33.206119] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:34:39.990 [2024-11-26 18:38:33.206179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:39.990 [2024-11-26 18:38:33.206232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:34:39.990 [2024-11-26 18:38:33.206286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:34:39.990 [2024-11-26 18:38:33.206335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:34:39.990 [2024-11-26 18:38:33.206386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:34:39.990 [2024-11-26 18:38:33.206437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:34:39.990 [2024-11-26 18:38:33.206490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:34:39.990 [2024-11-26 18:38:33.206540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:34:39.990 [2024-11-26 18:38:33.206591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:34:39.990 [2024-11-26 18:38:33.206650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:34:39.990 [2024-11-26 18:38:33.206708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:34:39.990 [2024-11-26 18:38:33.206759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:34:39.990 [2024-11-26 18:38:33.206813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:34:39.990 [2024-11-26 18:38:33.206862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:34:39.990 [2024-11-26 18:38:33.206914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:34:39.990 [2024-11-26 18:38:33.206964] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:34:39.990 [2024-11-26 18:38:33.207018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:39.990 [2024-11-26 18:38:33.207069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:39.990 [2024-11-26 18:38:33.207122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:34:39.990 [2024-11-26 18:38:33.207171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:34:39.990 [2024-11-26 18:38:33.207223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:34:39.990 [2024-11-26 18:38:33.207275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:39.990 [2024-11-26 18:38:33.207311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:34:39.990 [2024-11-26 18:38:33.207344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.114 ms 00:34:39.990 [2024-11-26 18:38:33.207379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:39.990 [2024-11-26 18:38:33.207506] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
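The setup traced above condenses to a short RPC sequence: a thin-provisioned logical volume on basen1 provides the 20480 MiB base device, a 5120 MiB split of the second namespace provides the NV cache, and bdev_ftl_create binds the two. A sketch using the same commands and UUIDs printed in this run (both UUIDs are run-specific):

    # Base device: lvstore on basen1, then a 20480 MiB thin-provisioned lvol
    scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
    scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u eb2041e0-1743-4172-9374-29f1d1b3676e
    # NV cache: attach the controller at 0000:00:10.0, split off 5120 MiB
    scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_split_create cachen1 -s 5120 1
    # Bind base + cache; -t 60 widens the RPC timeout because first startup
    # scrubs the NV cache data region (the scrub is timed just below)
    scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 686579b5-6886-4376-89d7-01279a7170c5 -c cachen1p0 --l2p_dram_limit 2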
00:34:39.990 [2024-11-26 18:38:33.207575] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:34:42.517 [2024-11-26 18:38:35.720902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.517 [2024-11-26 18:38:35.721094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:34:42.517 [2024-11-26 18:38:35.721136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2518.238 ms 00:34:42.517 [2024-11-26 18:38:35.721165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.517 [2024-11-26 18:38:35.766485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.517 [2024-11-26 18:38:35.766667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:42.517 [2024-11-26 18:38:35.766689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.998 ms 00:34:42.517 [2024-11-26 18:38:35.766702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.517 [2024-11-26 18:38:35.766848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.517 [2024-11-26 18:38:35.766864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:34:42.517 [2024-11-26 18:38:35.766875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:34:42.517 [2024-11-26 18:38:35.766892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.517 [2024-11-26 18:38:35.818996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.517 [2024-11-26 18:38:35.819064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:42.517 [2024-11-26 18:38:35.819080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.156 ms 00:34:42.517 [2024-11-26 18:38:35.819092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.517 [2024-11-26 18:38:35.819153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.517 [2024-11-26 18:38:35.819165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:42.517 [2024-11-26 18:38:35.819175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:34:42.517 [2024-11-26 18:38:35.819185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.517 [2024-11-26 18:38:35.819726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.517 [2024-11-26 18:38:35.819755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:42.517 [2024-11-26 18:38:35.819781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.470 ms 00:34:42.517 [2024-11-26 18:38:35.819793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.517 [2024-11-26 18:38:35.819856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.517 [2024-11-26 18:38:35.819877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:42.517 [2024-11-26 18:38:35.819887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:34:42.517 [2024-11-26 18:38:35.819900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.517 [2024-11-26 18:38:35.842575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.517 [2024-11-26 18:38:35.842659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:42.517 [2024-11-26 18:38:35.842692] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.695 ms 00:34:42.517 [2024-11-26 18:38:35.842704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.776 [2024-11-26 18:38:35.872581] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:34:42.776 [2024-11-26 18:38:35.873913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.776 [2024-11-26 18:38:35.873950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:34:42.776 [2024-11-26 18:38:35.873970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.116 ms 00:34:42.776 [2024-11-26 18:38:35.873980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.776 [2024-11-26 18:38:35.909894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.776 [2024-11-26 18:38:35.909994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:34:42.776 [2024-11-26 18:38:35.910016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.909 ms 00:34:42.776 [2024-11-26 18:38:35.910026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.776 [2024-11-26 18:38:35.910141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.776 [2024-11-26 18:38:35.910152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:34:42.776 [2024-11-26 18:38:35.910166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:34:42.776 [2024-11-26 18:38:35.910176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.776 [2024-11-26 18:38:35.955219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.776 [2024-11-26 18:38:35.955395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:34:42.776 [2024-11-26 18:38:35.955419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.036 ms 00:34:42.776 [2024-11-26 18:38:35.955428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.776 [2024-11-26 18:38:36.001697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.776 [2024-11-26 18:38:36.001776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:34:42.776 [2024-11-26 18:38:36.001793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.278 ms 00:34:42.776 [2024-11-26 18:38:36.001802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.776 [2024-11-26 18:38:36.002675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.776 [2024-11-26 18:38:36.002701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:34:42.776 [2024-11-26 18:38:36.002718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.818 ms 00:34:42.776 [2024-11-26 18:38:36.002727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:43.034 [2024-11-26 18:38:36.120732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:43.034 [2024-11-26 18:38:36.120860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:34:43.034 [2024-11-26 18:38:36.120890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 118.132 ms 00:34:43.034 [2024-11-26 18:38:36.120900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:43.034 [2024-11-26 18:38:36.166911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:34:43.034 [2024-11-26 18:38:36.166999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:34:43.034 [2024-11-26 18:38:36.167020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.953 ms 00:34:43.034 [2024-11-26 18:38:36.167030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:43.034 [2024-11-26 18:38:36.212985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:43.034 [2024-11-26 18:38:36.213166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:34:43.034 [2024-11-26 18:38:36.213191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.969 ms 00:34:43.034 [2024-11-26 18:38:36.213200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:43.034 [2024-11-26 18:38:36.257831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:43.034 [2024-11-26 18:38:36.257919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:34:43.034 [2024-11-26 18:38:36.257939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.643 ms 00:34:43.034 [2024-11-26 18:38:36.257947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:43.034 [2024-11-26 18:38:36.258012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:43.034 [2024-11-26 18:38:36.258024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:34:43.034 [2024-11-26 18:38:36.258039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:34:43.034 [2024-11-26 18:38:36.258048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:43.034 [2024-11-26 18:38:36.258190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:43.034 [2024-11-26 18:38:36.258206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:34:43.034 [2024-11-26 18:38:36.258217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:34:43.034 [2024-11-26 18:38:36.258225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:43.034 [2024-11-26 18:38:36.259533] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3080.887 ms, result 0 00:34:43.034 { 00:34:43.034 "name": "ftl", 00:34:43.034 "uuid": "3134470a-7984-414f-a04d-a7ad07872006" 00:34:43.034 } 00:34:43.034 18:38:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:34:43.292 [2024-11-26 18:38:36.542081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:43.292 18:38:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:34:43.551 18:38:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:34:43.808 [2024-11-26 18:38:37.077831] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:34:43.808 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:34:44.067 [2024-11-26 18:38:37.332365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:44.067 18:38:37 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:34:44.632 Fill FTL, iteration 1 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84071 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84071 /var/tmp/spdk.tgt.sock 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84071 ']' 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:34:44.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:44.632 18:38:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:44.632 [2024-11-26 18:38:37.871955] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
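The tcp_dd helper needs a second SPDK process to drive I/O from the initiator side: the target has just exported the ftl bdev over NVMe/TCP (subsystem nqn.2018-09.io.spdk:cnode0, listener 127.0.0.1:4420), and tcp_initiator_setup starts a dedicated spdk_tgt on its own RPC socket, then attaches to that subsystem so the namespace surfaces locally as ftln1. A sketch of the initiator side, matching the commands visible around this point in the log (the backgrounding is implied by the recorded spdk_ini_pid):

    # Initiator instance on a private RPC socket, pinned to core 1
    build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    # Connect over NVMe/TCP; the FTL namespace appears as bdev ftln1
    scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
        -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0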
00:34:44.632 [2024-11-26 18:38:37.872231] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84071 ] 00:34:44.890 [2024-11-26 18:38:38.054489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.148 [2024-11-26 18:38:38.240335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.089 18:38:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:46.089 18:38:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:34:46.089 18:38:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:34:46.351 ftln1 00:34:46.351 18:38:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:34:46.351 18:38:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:34:46.609 18:38:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:34:46.609 18:38:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84071 00:34:46.609 18:38:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84071 ']' 00:34:46.609 18:38:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84071 00:34:46.868 18:38:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:34:46.868 18:38:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:46.868 18:38:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84071 00:34:46.868 18:38:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:46.868 18:38:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:46.868 18:38:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84071' 00:34:46.868 killing process with pid 84071 00:34:46.868 18:38:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84071 00:34:46.868 18:38:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84071 00:34:50.152 18:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:34:50.152 18:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:34:50.152 [2024-11-26 18:38:42.843886] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
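Each fill pushes exactly 1 GiB of random data through the initiator: --bs=1048576 --count=1024 is 1024 one-MiB blocks (1048576 × 1024 = 1073741824 bytes, the size set earlier), written at queue depth 2, with --seek counted in blocks rather than bytes. Iteration 1 therefore lands at offset 0, as in the invocation above:

    # Fill iteration 1: 1 GiB of /dev/urandom into ftln1 at block offset 0
    build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0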
00:34:50.152 [2024-11-26 18:38:42.844190] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84128 ] 00:34:50.152 [2024-11-26 18:38:43.033370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.152 [2024-11-26 18:38:43.172978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.564  [2024-11-26T18:38:45.835Z] Copying: 219/1024 [MB] (219 MBps) [2024-11-26T18:38:46.768Z] Copying: 442/1024 [MB] (223 MBps) [2024-11-26T18:38:47.703Z] Copying: 665/1024 [MB] (223 MBps) [2024-11-26T18:38:48.637Z] Copying: 892/1024 [MB] (227 MBps) [2024-11-26T18:38:50.009Z] Copying: 1024/1024 [MB] (average 222 MBps) 00:34:56.674 00:34:56.674 Calculate MD5 checksum, iteration 1 00:34:56.674 18:38:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:34:56.674 18:38:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:34:56.674 18:38:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:56.674 18:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:56.674 18:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:56.674 18:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:56.674 18:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:56.674 18:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:56.674 [2024-11-26 18:38:49.758290] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
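Verification mirrors the fill: the same 1 GiB region is read back over TCP into a scratch file (--skip in blocks, like --seek) and hashed, and the digest is stashed, evidently for comparison once the device has been through the shutdown/upgrade cycle this test exercises. Iteration 1's read-back and hash, as issued here:

    # Read region 0 back into a file and record its MD5
    build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')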
00:34:56.674 [2024-11-26 18:38:49.758520] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84201 ] 00:34:56.674 [2024-11-26 18:38:49.937428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.932 [2024-11-26 18:38:50.076102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.304  [2024-11-26T18:38:52.573Z] Copying: 578/1024 [MB] (578 MBps) [2024-11-26T18:38:53.508Z] Copying: 1024/1024 [MB] (average 563 MBps) 00:35:00.173 00:35:00.173 18:38:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:35:00.173 18:38:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:02.703 18:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:35:02.703 Fill FTL, iteration 2 00:35:02.703 18:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=cdfc177c9fff589dbf92f8bb77220a8d 00:35:02.703 18:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:35:02.703 18:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:02.703 18:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:35:02.703 18:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:35:02.703 18:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:02.703 18:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:02.703 18:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:02.703 18:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:02.703 18:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:35:02.703 [2024-11-26 18:38:55.696828] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
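Putting those pieces together, the xtrace implies a loop of roughly this shape in upgrade_shutdown.sh; the control flow below is reconstructed from the (( i = 0 )) / (( i < iterations )) / (( i++ )) steps and the seek=1024 / skip=1024 assignments visible above, not copied from the script, and $file stands in for the test/ftl/file path:

    # Reconstructed fill/verify loop (iterations=2, bs=1 MiB, count=1024, qd=2)
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$(( seek + count ))          # 0 -> 1024 -> 2048
        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$(( skip + count ))          # 0 -> 1024 -> 2048
        sums[i]=$(md5sum $file | cut -f1 -d' ')
    done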
00:35:02.703 [2024-11-26 18:38:55.697496] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84264 ] 00:35:02.703 [2024-11-26 18:38:55.877638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.703 [2024-11-26 18:38:56.011445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.605  [2024-11-26T18:38:58.871Z] Copying: 196/1024 [MB] (196 MBps) [2024-11-26T18:38:59.819Z] Copying: 376/1024 [MB] (180 MBps) [2024-11-26T18:39:00.749Z] Copying: 558/1024 [MB] (182 MBps) [2024-11-26T18:39:01.684Z] Copying: 730/1024 [MB] (172 MBps) [2024-11-26T18:39:02.252Z] Copying: 927/1024 [MB] (197 MBps) [2024-11-26T18:39:03.718Z] Copying: 1024/1024 [MB] (average 187 MBps) 00:35:10.383 00:35:10.383 18:39:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:35:10.383 Calculate MD5 checksum, iteration 2 00:35:10.383 18:39:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:35:10.383 18:39:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:10.383 18:39:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:10.383 18:39:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:10.383 18:39:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:10.383 18:39:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:10.383 18:39:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:10.383 [2024-11-26 18:39:03.456025] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
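With both regions hashed (cdfc177c… for the first GiB above, 3752224… for the second just below), the test turns to FTL properties over RPC, as the next commands show: per the property dump's own descriptions, verbose_mode unlocks the extended property listing, and prep_upgrade_on_shutdown makes the coming shutdown execute everything needed for an upgrade to a new FTL version. In outline:

    # Enable the extended dump, then arm shutdown-time upgrade preparation
    scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
    scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    # Count cache chunks that hold data; prints 3 for the state dumped here:
    # two CLOSED chunks at utilization 1.0 plus the partly filled OPEN one
    scripts/rpc.py bdev_ftl_get_properties -b ftl | \
        jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'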
00:35:10.383 [2024-11-26 18:39:03.456252] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84345 ] 00:35:10.383 [2024-11-26 18:39:03.634084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.641 [2024-11-26 18:39:03.771809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.542  [2024-11-26T18:39:06.445Z] Copying: 596/1024 [MB] (596 MBps) [2024-11-26T18:39:07.851Z] Copying: 1024/1024 [MB] (average 588 MBps) 00:35:14.516 00:35:14.516 18:39:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:35:14.516 18:39:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:17.049 18:39:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:35:17.049 18:39:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=3752224598bac0e2df095d5a49ad2cc3 00:35:17.049 18:39:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:35:17.049 18:39:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:17.049 18:39:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:17.049 [2024-11-26 18:39:10.109938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:17.049 [2024-11-26 18:39:10.110090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:17.049 [2024-11-26 18:39:10.110133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:35:17.049 [2024-11-26 18:39:10.110178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:17.049 [2024-11-26 18:39:10.110260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:17.049 [2024-11-26 18:39:10.110310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:17.049 [2024-11-26 18:39:10.110324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:17.049 [2024-11-26 18:39:10.110335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:17.049 [2024-11-26 18:39:10.110395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:17.049 [2024-11-26 18:39:10.110406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:17.049 [2024-11-26 18:39:10.110417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:17.049 [2024-11-26 18:39:10.110426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:17.049 [2024-11-26 18:39:10.110501] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.564 ms, result 0 00:35:17.049 true 00:35:17.049 18:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:17.049 { 00:35:17.049 "name": "ftl", 00:35:17.049 "properties": [ 00:35:17.049 { 00:35:17.049 "name": "superblock_version", 00:35:17.049 "value": 5, 00:35:17.049 "read-only": true 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "name": "base_device", 00:35:17.049 "bands": [ 00:35:17.049 { 00:35:17.049 "id": 0, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 
00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 1, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 2, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 3, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 4, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 5, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 6, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 7, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 8, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 9, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 10, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 11, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 12, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 13, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 14, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 15, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 16, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 17, 00:35:17.049 "state": "FREE", 00:35:17.049 "validity": 0.0 00:35:17.049 } 00:35:17.049 ], 00:35:17.049 "read-only": true 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "name": "cache_device", 00:35:17.049 "type": "bdev", 00:35:17.049 "chunks": [ 00:35:17.049 { 00:35:17.049 "id": 0, 00:35:17.049 "state": "INACTIVE", 00:35:17.049 "utilization": 0.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 1, 00:35:17.049 "state": "CLOSED", 00:35:17.049 "utilization": 1.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 2, 00:35:17.049 "state": "CLOSED", 00:35:17.049 "utilization": 1.0 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 3, 00:35:17.049 "state": "OPEN", 00:35:17.049 "utilization": 0.001953125 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "id": 4, 00:35:17.049 "state": "OPEN", 00:35:17.049 "utilization": 0.0 00:35:17.049 } 00:35:17.049 ], 00:35:17.049 "read-only": true 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "name": "verbose_mode", 00:35:17.049 "value": true, 00:35:17.049 "unit": "", 00:35:17.049 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:17.049 }, 00:35:17.049 { 00:35:17.049 "name": "prep_upgrade_on_shutdown", 00:35:17.049 "value": false, 00:35:17.049 "unit": "", 00:35:17.049 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:17.049 } 00:35:17.049 ] 00:35:17.049 } 00:35:17.049 18:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:35:17.308 [2024-11-26 18:39:10.605611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:35:17.308 [2024-11-26 18:39:10.605682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:17.308 [2024-11-26 18:39:10.605698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:35:17.308 [2024-11-26 18:39:10.605707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:17.308 [2024-11-26 18:39:10.605739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:17.308 [2024-11-26 18:39:10.605750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:17.308 [2024-11-26 18:39:10.605760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:17.308 [2024-11-26 18:39:10.605769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:17.308 [2024-11-26 18:39:10.605790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:17.309 [2024-11-26 18:39:10.605800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:17.309 [2024-11-26 18:39:10.605809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:17.309 [2024-11-26 18:39:10.605817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:17.309 [2024-11-26 18:39:10.605883] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.267 ms, result 0 00:35:17.309 true 00:35:17.568 18:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:35:17.568 18:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:35:17.568 18:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:17.568 18:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:35:17.568 18:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:35:17.568 18:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:17.828 [2024-11-26 18:39:11.116471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:17.828 [2024-11-26 18:39:11.116640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:17.828 [2024-11-26 18:39:11.116699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:35:17.828 [2024-11-26 18:39:11.116725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:17.828 [2024-11-26 18:39:11.116778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:17.828 [2024-11-26 18:39:11.116812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:17.828 [2024-11-26 18:39:11.116837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:35:17.828 [2024-11-26 18:39:11.116891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:17.828 [2024-11-26 18:39:11.116972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:17.828 [2024-11-26 18:39:11.117113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:17.828 [2024-11-26 18:39:11.117145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:17.828 [2024-11-26 18:39:11.117176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:35:17.828 [2024-11-26 18:39:11.117272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.791 ms, result 0 00:35:17.828 true 00:35:17.828 18:39:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:18.086 { 00:35:18.086 "name": "ftl", 00:35:18.086 "properties": [ 00:35:18.086 { 00:35:18.086 "name": "superblock_version", 00:35:18.086 "value": 5, 00:35:18.086 "read-only": true 00:35:18.086 }, 00:35:18.086 { 00:35:18.086 "name": "base_device", 00:35:18.086 "bands": [ 00:35:18.086 { 00:35:18.086 "id": 0, 00:35:18.086 "state": "FREE", 00:35:18.086 "validity": 0.0 00:35:18.086 }, 00:35:18.086 { 00:35:18.086 "id": 1, 00:35:18.086 "state": "FREE", 00:35:18.086 "validity": 0.0 00:35:18.086 }, 00:35:18.086 { 00:35:18.086 "id": 2, 00:35:18.086 "state": "FREE", 00:35:18.086 "validity": 0.0 00:35:18.086 }, 00:35:18.086 { 00:35:18.086 "id": 3, 00:35:18.086 "state": "FREE", 00:35:18.086 "validity": 0.0 00:35:18.086 }, 00:35:18.086 { 00:35:18.086 "id": 4, 00:35:18.086 "state": "FREE", 00:35:18.086 "validity": 0.0 00:35:18.086 }, 00:35:18.086 { 00:35:18.086 "id": 5, 00:35:18.086 "state": "FREE", 00:35:18.086 "validity": 0.0 00:35:18.086 }, 00:35:18.086 { 00:35:18.086 "id": 6, 00:35:18.086 "state": "FREE", 00:35:18.086 "validity": 0.0 00:35:18.086 }, 00:35:18.086 { 00:35:18.086 "id": 7, 00:35:18.086 "state": "FREE", 00:35:18.086 "validity": 0.0 00:35:18.086 }, 00:35:18.086 { 00:35:18.086 "id": 8, 00:35:18.086 "state": "FREE", 00:35:18.086 "validity": 0.0 00:35:18.086 }, 00:35:18.086 { 00:35:18.086 "id": 9, 00:35:18.086 "state": "FREE", 00:35:18.086 "validity": 0.0 00:35:18.086 }, 00:35:18.086 { 00:35:18.086 "id": 10, 00:35:18.086 "state": "FREE", 00:35:18.086 "validity": 0.0 00:35:18.086 }, 00:35:18.086 { 00:35:18.086 "id": 11, 00:35:18.086 "state": "FREE", 00:35:18.086 "validity": 0.0 00:35:18.086 }, 00:35:18.086 { 00:35:18.087 "id": 12, 00:35:18.087 "state": "FREE", 00:35:18.087 "validity": 0.0 00:35:18.087 }, 00:35:18.087 { 00:35:18.087 "id": 13, 00:35:18.087 "state": "FREE", 00:35:18.087 "validity": 0.0 00:35:18.087 }, 00:35:18.087 { 00:35:18.087 "id": 14, 00:35:18.087 "state": "FREE", 00:35:18.087 "validity": 0.0 00:35:18.087 }, 00:35:18.087 { 00:35:18.087 "id": 15, 00:35:18.087 "state": "FREE", 00:35:18.087 "validity": 0.0 00:35:18.087 }, 00:35:18.087 { 00:35:18.087 "id": 16, 00:35:18.087 "state": "FREE", 00:35:18.087 "validity": 0.0 00:35:18.087 }, 00:35:18.087 { 00:35:18.087 "id": 17, 00:35:18.087 "state": "FREE", 00:35:18.087 "validity": 0.0 00:35:18.087 } 00:35:18.087 ], 00:35:18.087 "read-only": true 00:35:18.087 }, 00:35:18.087 { 00:35:18.087 "name": "cache_device", 00:35:18.087 "type": "bdev", 00:35:18.087 "chunks": [ 00:35:18.087 { 00:35:18.087 "id": 0, 00:35:18.087 "state": "INACTIVE", 00:35:18.087 "utilization": 0.0 00:35:18.087 }, 00:35:18.087 { 00:35:18.087 "id": 1, 00:35:18.087 "state": "CLOSED", 00:35:18.087 "utilization": 1.0 00:35:18.087 }, 00:35:18.087 { 00:35:18.087 "id": 2, 00:35:18.087 "state": "CLOSED", 00:35:18.087 "utilization": 1.0 00:35:18.087 }, 00:35:18.087 { 00:35:18.087 "id": 3, 00:35:18.087 "state": "OPEN", 00:35:18.087 "utilization": 0.001953125 00:35:18.087 }, 00:35:18.087 { 00:35:18.087 "id": 4, 00:35:18.087 "state": "OPEN", 00:35:18.087 "utilization": 0.0 00:35:18.087 } 00:35:18.087 ], 00:35:18.087 "read-only": true 00:35:18.087 }, 00:35:18.087 { 00:35:18.087 "name": "verbose_mode", 
00:35:18.087 "value": true, 00:35:18.087 "unit": "", 00:35:18.087 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:18.087 }, 00:35:18.087 { 00:35:18.087 "name": "prep_upgrade_on_shutdown", 00:35:18.087 "value": true, 00:35:18.087 "unit": "", 00:35:18.087 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:18.087 } 00:35:18.087 ] 00:35:18.087 } 00:35:18.087 18:39:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:35:18.087 18:39:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83943 ]] 00:35:18.087 18:39:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83943 00:35:18.087 18:39:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83943 ']' 00:35:18.087 18:39:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83943 00:35:18.087 18:39:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:35:18.087 18:39:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:18.087 18:39:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83943 00:35:18.087 18:39:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:18.347 18:39:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:18.347 18:39:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83943' 00:35:18.347 killing process with pid 83943 00:35:18.347 18:39:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83943 00:35:18.347 18:39:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83943 00:35:19.725 [2024-11-26 18:39:12.740626] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:35:19.725 [2024-11-26 18:39:12.762117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:19.725 [2024-11-26 18:39:12.762181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:35:19.725 [2024-11-26 18:39:12.762197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:19.725 [2024-11-26 18:39:12.762207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:19.725 [2024-11-26 18:39:12.762233] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:35:19.725 [2024-11-26 18:39:12.767409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:19.725 [2024-11-26 18:39:12.767451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:35:19.725 [2024-11-26 18:39:12.767464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.169 ms 00:35:19.725 [2024-11-26 18:39:12.767483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.901 [2024-11-26 18:39:20.811132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:27.901 [2024-11-26 18:39:20.811208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:35:27.901 [2024-11-26 18:39:20.811230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8059.127 ms 00:35:27.901 [2024-11-26 18:39:20.811238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.901 [2024-11-26 18:39:20.812553] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:35:27.901 [2024-11-26 18:39:20.812592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:35:27.901 [2024-11-26 18:39:20.812605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.298 ms 00:35:27.901 [2024-11-26 18:39:20.812614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.901 [2024-11-26 18:39:20.813785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:27.901 [2024-11-26 18:39:20.813815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:35:27.901 [2024-11-26 18:39:20.813827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.131 ms 00:35:27.901 [2024-11-26 18:39:20.813844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.901 [2024-11-26 18:39:20.832245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:27.901 [2024-11-26 18:39:20.832314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:35:27.901 [2024-11-26 18:39:20.832328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.391 ms 00:35:27.901 [2024-11-26 18:39:20.832337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.901 [2024-11-26 18:39:20.843227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:27.901 [2024-11-26 18:39:20.843317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:35:27.901 [2024-11-26 18:39:20.843335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.858 ms 00:35:27.901 [2024-11-26 18:39:20.843343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.901 [2024-11-26 18:39:20.843450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:27.901 [2024-11-26 18:39:20.843470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:35:27.901 [2024-11-26 18:39:20.843479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:35:27.901 [2024-11-26 18:39:20.843487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.901 [2024-11-26 18:39:20.861160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:27.901 [2024-11-26 18:39:20.861229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:35:27.901 [2024-11-26 18:39:20.861243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.687 ms 00:35:27.901 [2024-11-26 18:39:20.861252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.901 [2024-11-26 18:39:20.879612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:27.901 [2024-11-26 18:39:20.879685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:35:27.901 [2024-11-26 18:39:20.879699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.346 ms 00:35:27.901 [2024-11-26 18:39:20.879708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.901 [2024-11-26 18:39:20.896869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:27.901 [2024-11-26 18:39:20.896953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:35:27.901 [2024-11-26 18:39:20.896967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.139 ms 00:35:27.901 [2024-11-26 18:39:20.896975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.901 [2024-11-26 18:39:20.914085] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:27.902 [2024-11-26 18:39:20.914158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:35:27.902 [2024-11-26 18:39:20.914173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.030 ms 00:35:27.902 [2024-11-26 18:39:20.914181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.902 [2024-11-26 18:39:20.914236] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:35:27.902 [2024-11-26 18:39:20.914281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:35:27.902 [2024-11-26 18:39:20.914293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:35:27.902 [2024-11-26 18:39:20.914303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:35:27.902 [2024-11-26 18:39:20.914312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:27.902 [2024-11-26 18:39:20.914438] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:35:27.902 [2024-11-26 18:39:20.914447] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 3134470a-7984-414f-a04d-a7ad07872006 00:35:27.902 [2024-11-26 18:39:20.914456] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:35:27.902 [2024-11-26 18:39:20.914464] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:35:27.902 [2024-11-26 18:39:20.914472] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:35:27.902 [2024-11-26 18:39:20.914482] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:35:27.902 [2024-11-26 18:39:20.914497] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:35:27.902 [2024-11-26 18:39:20.914506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:35:27.902 [2024-11-26 18:39:20.914518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:35:27.902 [2024-11-26 18:39:20.914525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:35:27.902 [2024-11-26 18:39:20.914534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:35:27.902 [2024-11-26 18:39:20.914543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:27.902 [2024-11-26 18:39:20.914551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:35:27.902 [2024-11-26 18:39:20.914560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:35:27.902 [2024-11-26 18:39:20.914569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.902 [2024-11-26 18:39:20.938782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:27.902 [2024-11-26 18:39:20.938950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:35:27.902 [2024-11-26 18:39:20.938976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.206 ms 00:35:27.902 [2024-11-26 18:39:20.938985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.902 [2024-11-26 18:39:20.939673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:27.902 [2024-11-26 18:39:20.939707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:35:27.902 [2024-11-26 18:39:20.939718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.627 ms 00:35:27.902 [2024-11-26 18:39:20.939727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.902 [2024-11-26 18:39:21.014833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:27.902 [2024-11-26 18:39:21.014907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:27.902 [2024-11-26 18:39:21.014921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:27.902 [2024-11-26 18:39:21.014930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.902 [2024-11-26 18:39:21.014985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:27.902 [2024-11-26 18:39:21.014995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:27.902 [2024-11-26 18:39:21.015004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:27.902 [2024-11-26 18:39:21.015012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.902 [2024-11-26 18:39:21.015136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:27.902 [2024-11-26 18:39:21.015151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:27.902 [2024-11-26 18:39:21.015166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:27.902 [2024-11-26 18:39:21.015175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.902 [2024-11-26 18:39:21.015196] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:27.902 [2024-11-26 18:39:21.015206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:27.902 [2024-11-26 18:39:21.015214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:27.902 [2024-11-26 18:39:21.015222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:27.902 [2024-11-26 18:39:21.156366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:27.902 [2024-11-26 18:39:21.156464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:27.902 [2024-11-26 18:39:21.156488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:27.902 [2024-11-26 18:39:21.156496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:28.161 [2024-11-26 18:39:21.277172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:28.161 [2024-11-26 18:39:21.277247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:28.161 [2024-11-26 18:39:21.277262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:28.161 [2024-11-26 18:39:21.277271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:28.161 [2024-11-26 18:39:21.277396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:28.161 [2024-11-26 18:39:21.277408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:28.161 [2024-11-26 18:39:21.277433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:28.161 [2024-11-26 18:39:21.277446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:28.161 [2024-11-26 18:39:21.277499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:28.161 [2024-11-26 18:39:21.277511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:28.161 [2024-11-26 18:39:21.277519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:28.161 [2024-11-26 18:39:21.277528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:28.161 [2024-11-26 18:39:21.277700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:28.161 [2024-11-26 18:39:21.277716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:28.162 [2024-11-26 18:39:21.277726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:28.162 [2024-11-26 18:39:21.277735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:28.162 [2024-11-26 18:39:21.277789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:28.162 [2024-11-26 18:39:21.277802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:35:28.162 [2024-11-26 18:39:21.277812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:28.162 [2024-11-26 18:39:21.277821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:28.162 [2024-11-26 18:39:21.277863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:28.162 [2024-11-26 18:39:21.277873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:28.162 [2024-11-26 18:39:21.277882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:28.162 [2024-11-26 18:39:21.277892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:28.162 
[2024-11-26 18:39:21.277944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:28.162 [2024-11-26 18:39:21.277955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:28.162 [2024-11-26 18:39:21.277964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:28.162 [2024-11-26 18:39:21.277973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:28.162 [2024-11-26 18:39:21.278108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8532.389 ms, result 0 00:35:34.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84609 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84609 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84609 ']' 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:34.725 18:39:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:34.725 [2024-11-26 18:39:27.761020] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
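
The stretch of trace above is the heart of the clean-upgrade path: tcp_target_shutdown (ftl/common.sh@130-131 in the xtrace) kills the old target, FTL reacts by running the 8.5-second 'FTL shutdown' management process that persists L2P, metadata and the clean state, and tcp_target_setup (ftl/common.sh@81-91) then relaunches spdk_tgt from the persisted tgt.json. A minimal sketch of that restart cycle, reconstructed from the traced commands rather than copied from the ftl/common.sh source; $spdk_dir stands in for the /home/vagrant/spdk_repo/spdk prefix, and killprocess and waitforlisten are the autotest_common.sh helpers visible in the trace:

    # Graceful restart as traced above; the function bodies are a
    # reconstruction, not the verbatim ftl/common.sh source.
    tcp_target_shutdown() {
        [[ -n $spdk_tgt_pid ]] || return 0
        killprocess "$spdk_tgt_pid"   # kill + wait, triggering 'FTL shutdown'
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        # Relaunch pinned to core 0, restoring bdevs from the saved config.
        "$spdk_dir/build/bin/spdk_tgt" '--cpumask=[0]' \
            --config="$spdk_dir/test/ftl/config/tgt.json" &
        spdk_tgt_pid=$!
        waitforlisten "$spdk_tgt_pid"   # blocks until /var/tmp/spdk.sock answers
    }

Because the shutdown was graceful, the 'FTL startup' sequence that follows can restore the persisted metadata regions (NV cache metadata, valid map, band info, trim, P2L checkpoints, L2P) rather than rebuilding them.
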
00:35:34.725 [2024-11-26 18:39:27.761190] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84609 ] 00:35:34.725 [2024-11-26 18:39:27.941762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.984 [2024-11-26 18:39:28.077377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.923 [2024-11-26 18:39:29.147532] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:35:35.923 [2024-11-26 18:39:29.147612] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:35:36.183 [2024-11-26 18:39:29.293655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.183 [2024-11-26 18:39:29.293719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:35:36.183 [2024-11-26 18:39:29.293733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:35:36.183 [2024-11-26 18:39:29.293741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.183 [2024-11-26 18:39:29.293814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.183 [2024-11-26 18:39:29.293826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:36.183 [2024-11-26 18:39:29.293834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:35:36.183 [2024-11-26 18:39:29.293842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.183 [2024-11-26 18:39:29.293865] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:35:36.183 [2024-11-26 18:39:29.294959] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:35:36.183 [2024-11-26 18:39:29.294990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.183 [2024-11-26 18:39:29.294999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:36.183 [2024-11-26 18:39:29.295009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.132 ms 00:35:36.183 [2024-11-26 18:39:29.295017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.183 [2024-11-26 18:39:29.296500] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:35:36.183 [2024-11-26 18:39:29.316582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.183 [2024-11-26 18:39:29.316735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:35:36.183 [2024-11-26 18:39:29.316752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.121 ms 00:35:36.183 [2024-11-26 18:39:29.316761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.183 [2024-11-26 18:39:29.316835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.183 [2024-11-26 18:39:29.316845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:35:36.183 [2024-11-26 18:39:29.316862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:35:36.183 [2024-11-26 18:39:29.316870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.183 [2024-11-26 18:39:29.323995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.183 [2024-11-26 
18:39:29.324097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:36.183 [2024-11-26 18:39:29.324112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.033 ms 00:35:36.183 [2024-11-26 18:39:29.324119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.183 [2024-11-26 18:39:29.324277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.183 [2024-11-26 18:39:29.324296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:36.183 [2024-11-26 18:39:29.324306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.132 ms 00:35:36.183 [2024-11-26 18:39:29.324314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.183 [2024-11-26 18:39:29.324376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.183 [2024-11-26 18:39:29.324392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:35:36.183 [2024-11-26 18:39:29.324401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:35:36.183 [2024-11-26 18:39:29.324410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.184 [2024-11-26 18:39:29.324439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:35:36.184 [2024-11-26 18:39:29.329521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.184 [2024-11-26 18:39:29.329553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:36.184 [2024-11-26 18:39:29.329567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.100 ms 00:35:36.184 [2024-11-26 18:39:29.329591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.184 [2024-11-26 18:39:29.329620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.184 [2024-11-26 18:39:29.329629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:35:36.184 [2024-11-26 18:39:29.329651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:35:36.184 [2024-11-26 18:39:29.329660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.184 [2024-11-26 18:39:29.329719] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:35:36.184 [2024-11-26 18:39:29.329747] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:35:36.184 [2024-11-26 18:39:29.329796] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:35:36.184 [2024-11-26 18:39:29.329813] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:35:36.184 [2024-11-26 18:39:29.329917] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:35:36.184 [2024-11-26 18:39:29.329929] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:35:36.184 [2024-11-26 18:39:29.329940] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:35:36.184 [2024-11-26 18:39:29.329951] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:35:36.184 [2024-11-26 18:39:29.329965] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:35:36.184 [2024-11-26 18:39:29.329974] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:35:36.184 [2024-11-26 18:39:29.329982] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:35:36.184 [2024-11-26 18:39:29.329990] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:35:36.184 [2024-11-26 18:39:29.329999] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:35:36.184 [2024-11-26 18:39:29.330008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.184 [2024-11-26 18:39:29.330016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:35:36.184 [2024-11-26 18:39:29.330025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.292 ms 00:35:36.184 [2024-11-26 18:39:29.330034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.184 [2024-11-26 18:39:29.330123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.184 [2024-11-26 18:39:29.330138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:35:36.184 [2024-11-26 18:39:29.330151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:35:36.184 [2024-11-26 18:39:29.330159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.184 [2024-11-26 18:39:29.330260] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:35:36.184 [2024-11-26 18:39:29.330272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:35:36.184 [2024-11-26 18:39:29.330281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:36.184 [2024-11-26 18:39:29.330290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:36.184 [2024-11-26 18:39:29.330299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:35:36.184 [2024-11-26 18:39:29.330306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:35:36.184 [2024-11-26 18:39:29.330314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:35:36.184 [2024-11-26 18:39:29.330321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:35:36.184 [2024-11-26 18:39:29.330329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:35:36.184 [2024-11-26 18:39:29.330336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:36.184 [2024-11-26 18:39:29.330344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:35:36.184 [2024-11-26 18:39:29.330352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:35:36.184 [2024-11-26 18:39:29.330359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:36.184 [2024-11-26 18:39:29.330367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:35:36.184 [2024-11-26 18:39:29.330374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:35:36.184 [2024-11-26 18:39:29.330381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:36.184 [2024-11-26 18:39:29.330388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:35:36.184 [2024-11-26 18:39:29.330396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:35:36.184 [2024-11-26 18:39:29.330403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:36.184 [2024-11-26 18:39:29.330411] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:35:36.184 [2024-11-26 18:39:29.330419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:35:36.184 [2024-11-26 18:39:29.330426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:36.184 [2024-11-26 18:39:29.330434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:35:36.184 [2024-11-26 18:39:29.330452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:35:36.184 [2024-11-26 18:39:29.330460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:36.184 [2024-11-26 18:39:29.330467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:35:36.184 [2024-11-26 18:39:29.330475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:35:36.184 [2024-11-26 18:39:29.330482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:36.184 [2024-11-26 18:39:29.330489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:35:36.184 [2024-11-26 18:39:29.330497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:35:36.184 [2024-11-26 18:39:29.330505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:36.184 [2024-11-26 18:39:29.330512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:35:36.184 [2024-11-26 18:39:29.330519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:35:36.184 [2024-11-26 18:39:29.330526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:36.184 [2024-11-26 18:39:29.330533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:35:36.184 [2024-11-26 18:39:29.330542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:35:36.184 [2024-11-26 18:39:29.330549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:36.184 [2024-11-26 18:39:29.330556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:35:36.184 [2024-11-26 18:39:29.330563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:35:36.184 [2024-11-26 18:39:29.330570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:36.184 [2024-11-26 18:39:29.330578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:35:36.184 [2024-11-26 18:39:29.330585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:35:36.184 [2024-11-26 18:39:29.330593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:36.184 [2024-11-26 18:39:29.330600] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:35:36.184 [2024-11-26 18:39:29.330608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:35:36.184 [2024-11-26 18:39:29.330627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:36.184 [2024-11-26 18:39:29.330639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:36.184 [2024-11-26 18:39:29.330648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:35:36.184 [2024-11-26 18:39:29.330656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:35:36.184 [2024-11-26 18:39:29.330663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:35:36.184 [2024-11-26 18:39:29.330671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:35:36.184 [2024-11-26 18:39:29.330678] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:35:36.184 [2024-11-26 18:39:29.330685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:35:36.184 [2024-11-26 18:39:29.330694] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:35:36.184 [2024-11-26 18:39:29.330704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:36.184 [2024-11-26 18:39:29.330714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:35:36.184 [2024-11-26 18:39:29.330722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:35:36.184 [2024-11-26 18:39:29.330731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:35:36.184 [2024-11-26 18:39:29.330739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:35:36.184 [2024-11-26 18:39:29.330747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:35:36.184 [2024-11-26 18:39:29.330754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:35:36.184 [2024-11-26 18:39:29.330762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:35:36.184 [2024-11-26 18:39:29.330770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:35:36.184 [2024-11-26 18:39:29.330779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:35:36.184 [2024-11-26 18:39:29.330787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:35:36.184 [2024-11-26 18:39:29.330794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:35:36.184 [2024-11-26 18:39:29.330802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:35:36.184 [2024-11-26 18:39:29.330810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:35:36.184 [2024-11-26 18:39:29.330820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:35:36.185 [2024-11-26 18:39:29.330827] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:35:36.185 [2024-11-26 18:39:29.330836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:36.185 [2024-11-26 18:39:29.330845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:36.185 [2024-11-26 18:39:29.330854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:35:36.185 [2024-11-26 18:39:29.330862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:35:36.185 [2024-11-26 18:39:29.330870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:35:36.185 [2024-11-26 18:39:29.330879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.185 [2024-11-26 18:39:29.330887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:35:36.185 [2024-11-26 18:39:29.330896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.683 ms 00:35:36.185 [2024-11-26 18:39:29.330904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.185 [2024-11-26 18:39:29.330954] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:35:36.185 [2024-11-26 18:39:29.330968] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:35:39.471 [2024-11-26 18:39:32.731751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.471 [2024-11-26 18:39:32.731820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:35:39.471 [2024-11-26 18:39:32.731837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3407.354 ms 00:35:39.471 [2024-11-26 18:39:32.731846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.471 [2024-11-26 18:39:32.772964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.471 [2024-11-26 18:39:32.773021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:39.471 [2024-11-26 18:39:32.773036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.797 ms 00:35:39.471 [2024-11-26 18:39:32.773062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.471 [2024-11-26 18:39:32.773198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.471 [2024-11-26 18:39:32.773210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:35:39.471 [2024-11-26 18:39:32.773220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:35:39.471 [2024-11-26 18:39:32.773229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.731 [2024-11-26 18:39:32.829973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.731 [2024-11-26 18:39:32.830033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:39.731 [2024-11-26 18:39:32.830052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 56.810 ms 00:35:39.731 [2024-11-26 18:39:32.830062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.732 [2024-11-26 18:39:32.830119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.732 [2024-11-26 18:39:32.830129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:39.732 [2024-11-26 18:39:32.830138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:39.732 [2024-11-26 18:39:32.830147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.732 [2024-11-26 18:39:32.830718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.732 [2024-11-26 18:39:32.830734] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:39.732 [2024-11-26 18:39:32.830743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.475 ms 00:35:39.732 [2024-11-26 18:39:32.830756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.732 [2024-11-26 18:39:32.830806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.732 [2024-11-26 18:39:32.830818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:39.732 [2024-11-26 18:39:32.830827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:35:39.732 [2024-11-26 18:39:32.830835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.732 [2024-11-26 18:39:32.854487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.732 [2024-11-26 18:39:32.854631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:39.732 [2024-11-26 18:39:32.854665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.673 ms 00:35:39.732 [2024-11-26 18:39:32.854674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.732 [2024-11-26 18:39:32.888065] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:35:39.732 [2024-11-26 18:39:32.888130] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:35:39.732 [2024-11-26 18:39:32.888146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.732 [2024-11-26 18:39:32.888155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:35:39.732 [2024-11-26 18:39:32.888166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.367 ms 00:35:39.732 [2024-11-26 18:39:32.888173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.732 [2024-11-26 18:39:32.912257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.732 [2024-11-26 18:39:32.912318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:35:39.732 [2024-11-26 18:39:32.912332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.060 ms 00:35:39.732 [2024-11-26 18:39:32.912341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.732 [2024-11-26 18:39:32.934620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.732 [2024-11-26 18:39:32.934695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:35:39.732 [2024-11-26 18:39:32.934710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.242 ms 00:35:39.732 [2024-11-26 18:39:32.934719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.732 [2024-11-26 18:39:32.957899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.732 [2024-11-26 18:39:32.958052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:35:39.732 [2024-11-26 18:39:32.958071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.156 ms 00:35:39.732 [2024-11-26 18:39:32.958082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.732 [2024-11-26 18:39:32.959131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.732 [2024-11-26 18:39:32.959165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:35:39.732 [2024-11-26 
18:39:32.959177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.876 ms 00:35:39.732 [2024-11-26 18:39:32.959186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.732 [2024-11-26 18:39:33.061070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.732 [2024-11-26 18:39:33.061134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:35:39.732 [2024-11-26 18:39:33.061150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 102.040 ms 00:35:39.732 [2024-11-26 18:39:33.061170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.991 [2024-11-26 18:39:33.077382] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:35:39.991 [2024-11-26 18:39:33.078589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.991 [2024-11-26 18:39:33.078636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:35:39.991 [2024-11-26 18:39:33.078652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.368 ms 00:35:39.991 [2024-11-26 18:39:33.078662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.991 [2024-11-26 18:39:33.078811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.991 [2024-11-26 18:39:33.078827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:35:39.991 [2024-11-26 18:39:33.078837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:35:39.991 [2024-11-26 18:39:33.078846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.991 [2024-11-26 18:39:33.078913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.991 [2024-11-26 18:39:33.078924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:35:39.991 [2024-11-26 18:39:33.078934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:35:39.991 [2024-11-26 18:39:33.078942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.991 [2024-11-26 18:39:33.078965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.991 [2024-11-26 18:39:33.078975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:35:39.991 [2024-11-26 18:39:33.078987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:35:39.991 [2024-11-26 18:39:33.078995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.991 [2024-11-26 18:39:33.079030] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:35:39.991 [2024-11-26 18:39:33.079041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.991 [2024-11-26 18:39:33.079049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:35:39.991 [2024-11-26 18:39:33.079058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:35:39.991 [2024-11-26 18:39:33.079066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.991 [2024-11-26 18:39:33.123653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.991 [2024-11-26 18:39:33.123829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:35:39.991 [2024-11-26 18:39:33.123870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.648 ms 00:35:39.991 [2024-11-26 18:39:33.123896] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.991 [2024-11-26 18:39:33.124048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:39.991 [2024-11-26 18:39:33.124082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:35:39.991 [2024-11-26 18:39:33.124118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:35:39.991 [2024-11-26 18:39:33.124145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:39.991 [2024-11-26 18:39:33.125491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3838.705 ms, result 0 00:35:39.991 [2024-11-26 18:39:33.140259] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.991 [2024-11-26 18:39:33.156227] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:35:39.991 [2024-11-26 18:39:33.166832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:40.929 18:39:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:40.929 18:39:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:35:40.929 18:39:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:40.929 18:39:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:35:40.929 18:39:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:40.929 [2024-11-26 18:39:34.189856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.929 [2024-11-26 18:39:34.190013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:40.929 [2024-11-26 18:39:34.190039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:35:40.929 [2024-11-26 18:39:34.190050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.929 [2024-11-26 18:39:34.190086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.929 [2024-11-26 18:39:34.190097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:40.929 [2024-11-26 18:39:34.190108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:35:40.929 [2024-11-26 18:39:34.190116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.929 [2024-11-26 18:39:34.190138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:40.929 [2024-11-26 18:39:34.190149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:40.929 [2024-11-26 18:39:34.190158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:40.929 [2024-11-26 18:39:34.190168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:40.929 [2024-11-26 18:39:34.190242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.384 ms, result 0 00:35:40.929 true 00:35:40.929 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:41.192 { 00:35:41.192 "name": "ftl", 00:35:41.192 "properties": [ 00:35:41.192 { 00:35:41.192 "name": "superblock_version", 00:35:41.192 "value": 5, 00:35:41.192 "read-only": true 00:35:41.192 }, 
00:35:41.192 { 00:35:41.192 "name": "base_device", 00:35:41.192 "bands": [ 00:35:41.192 { 00:35:41.192 "id": 0, 00:35:41.193 "state": "CLOSED", 00:35:41.193 "validity": 1.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 1, 00:35:41.193 "state": "CLOSED", 00:35:41.193 "validity": 1.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 2, 00:35:41.193 "state": "CLOSED", 00:35:41.193 "validity": 0.007843137254901933 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 3, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 4, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 5, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 6, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 7, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 8, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 9, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 10, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 11, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 12, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 13, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 14, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 15, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 16, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 17, 00:35:41.193 "state": "FREE", 00:35:41.193 "validity": 0.0 00:35:41.193 } 00:35:41.193 ], 00:35:41.193 "read-only": true 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "name": "cache_device", 00:35:41.193 "type": "bdev", 00:35:41.193 "chunks": [ 00:35:41.193 { 00:35:41.193 "id": 0, 00:35:41.193 "state": "INACTIVE", 00:35:41.193 "utilization": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 1, 00:35:41.193 "state": "OPEN", 00:35:41.193 "utilization": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 2, 00:35:41.193 "state": "OPEN", 00:35:41.193 "utilization": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 3, 00:35:41.193 "state": "FREE", 00:35:41.193 "utilization": 0.0 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "id": 4, 00:35:41.193 "state": "FREE", 00:35:41.193 "utilization": 0.0 00:35:41.193 } 00:35:41.193 ], 00:35:41.193 "read-only": true 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "name": "verbose_mode", 00:35:41.193 "value": true, 00:35:41.193 "unit": "", 00:35:41.193 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:41.193 }, 00:35:41.193 { 00:35:41.193 "name": "prep_upgrade_on_shutdown", 00:35:41.193 "value": false, 00:35:41.193 "unit": "", 00:35:41.193 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:41.193 } 00:35:41.193 ] 00:35:41.193 } 00:35:41.450 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:35:41.450 18:39:34 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:35:41.450 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:41.710 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:35:41.710 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:35:41.710 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:35:41.710 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:35:41.710 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:41.710 Validate MD5 checksum, iteration 1 00:35:41.710 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:35:41.710 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:35:41.710 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:35:41.710 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:35:41.710 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:35:41.710 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:41.710 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:35:41.710 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:41.710 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:41.710 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:41.710 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:41.710 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:41.710 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:41.968 [2024-11-26 18:39:35.134060] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
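
The two probes at upgrade_shutdown.sh@82 and @89 above condense the post-restart invariant: after a clean shutdown no cache chunk may still hold data and no band may still be open. A sketch of those checks, assuming ftl_get_properties is the @59 wrapper seen in the trace; the jq filters are verbatim from the log, the exit handling is a reconstruction:

    ftl_get_properties() {
        "$spdk_dir/scripts/rpc.py" bdev_ftl_get_properties -b ftl
    }

    # Chunks with non-zero utilization (used=0 in this run) ...
    used=$(ftl_get_properties | jq '[.properties[]
        | select(.name == "cache_device") | .chunks[]
        | select(.utilization != 0.0)] | length')
    # ... and bands left in the OPENED state (opened=0 in this run).
    opened=$(ftl_get_properties | jq '[.properties[]
        | select(.name == "bands") | .bands[]
        | select(.state == "OPENED")] | length')
    [[ $used -ne 0 ]] && exit 1
    [[ $opened -ne 0 ]] && exit 1

Compare with the @63 probe before the shutdown, which found used=3: consistent with the band states in the JSON dump above, the data that sat in the cache chunks now lives on bands 0-2 of the base device, so chunk utilization is back to 0 everywhere.
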
00:35:41.968 [2024-11-26 18:39:35.134353] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84704 ] 00:35:42.226 [2024-11-26 18:39:35.314809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.226 [2024-11-26 18:39:35.454945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.144  [2024-11-26T18:39:38.416Z] Copying: 572/1024 [MB] (572 MBps) [2024-11-26T18:39:40.322Z] Copying: 1024/1024 [MB] (average 531 MBps) 00:35:46.988 00:35:46.988 18:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:35:46.988 18:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:48.891 18:39:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:35:48.891 18:39:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cdfc177c9fff589dbf92f8bb77220a8d 00:35:48.891 18:39:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cdfc177c9fff589dbf92f8bb77220a8d != \c\d\f\c\1\7\7\c\9\f\f\f\5\8\9\d\b\f\9\2\f\8\b\b\7\7\2\2\0\a\8\d ]] 00:35:48.891 18:39:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:35:48.891 18:39:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:48.891 18:39:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:35:48.891 Validate MD5 checksum, iteration 2 00:35:48.891 18:39:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:48.891 18:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:48.891 18:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:48.891 18:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:48.891 18:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:48.891 18:39:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:48.891 [2024-11-26 18:39:42.179267] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 
00:35:48.891 [2024-11-26 18:39:42.179583] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84780 ] 00:35:49.151 [2024-11-26 18:39:42.367688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.409 [2024-11-26 18:39:42.513844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:51.310  [2024-11-26T18:39:45.212Z] Copying: 534/1024 [MB] (534 MBps) [2024-11-26T18:39:45.470Z] Copying: 1012/1024 [MB] (478 MBps) [2024-11-26T18:39:49.699Z] Copying: 1024/1024 [MB] (average 508 MBps) 00:35:56.364 00:35:56.364 18:39:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:35:56.364 18:39:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3752224598bac0e2df095d5a49ad2cc3 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3752224598bac0e2df095d5a49ad2cc3 != \3\7\5\2\2\2\4\5\9\8\b\a\c\0\e\2\d\f\0\9\5\d\5\a\4\9\a\d\2\c\c\3 ]] 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84609 ]] 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84609 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84875 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84875 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84875 ']' 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:58.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:58.302 18:39:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:58.302 [2024-11-26 18:39:51.439647] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:35:58.303 [2024-11-26 18:39:51.439873] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84875 ] 00:35:58.303 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84609 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:35:58.303 [2024-11-26 18:39:51.622120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:58.562 [2024-11-26 18:39:51.752695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:59.503 [2024-11-26 18:39:52.797352] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:35:59.503 [2024-11-26 18:39:52.797426] bdev.c:8632:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:35:59.763 [2024-11-26 18:39:52.944639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.763 [2024-11-26 18:39:52.944785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:35:59.763 [2024-11-26 18:39:52.944803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:35:59.763 [2024-11-26 18:39:52.944812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.763 [2024-11-26 18:39:52.944919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.764 [2024-11-26 18:39:52.944934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:59.764 [2024-11-26 18:39:52.944945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms 00:35:59.764 [2024-11-26 18:39:52.944953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.764 [2024-11-26 18:39:52.944979] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:35:59.764 [2024-11-26 18:39:52.946121] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:35:59.764 [2024-11-26 18:39:52.946159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.764 [2024-11-26 18:39:52.946171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:59.764 [2024-11-26 18:39:52.946182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.187 ms 00:35:59.764 [2024-11-26 18:39:52.946192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.764 [2024-11-26 18:39:52.946584] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:35:59.764 [2024-11-26 18:39:52.974635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.764 [2024-11-26 18:39:52.974694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:35:59.764 [2024-11-26 18:39:52.974709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.104 ms 00:35:59.764 [2024-11-26 18:39:52.974718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.764 [2024-11-26 18:39:52.991530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:35:59.764 [2024-11-26 18:39:52.991595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:35:59.764 [2024-11-26 18:39:52.991609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:35:59.764 [2024-11-26 18:39:52.991631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.764 [2024-11-26 18:39:52.992107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.764 [2024-11-26 18:39:52.992131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:59.764 [2024-11-26 18:39:52.992142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.344 ms 00:35:59.764 [2024-11-26 18:39:52.992152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.764 [2024-11-26 18:39:52.992219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.764 [2024-11-26 18:39:52.992234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:59.764 [2024-11-26 18:39:52.992244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:35:59.764 [2024-11-26 18:39:52.992252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.764 [2024-11-26 18:39:52.992284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.764 [2024-11-26 18:39:52.992296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:35:59.764 [2024-11-26 18:39:52.992305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:35:59.764 [2024-11-26 18:39:52.992314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.764 [2024-11-26 18:39:52.992342] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:35:59.764 [2024-11-26 18:39:52.997798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.764 [2024-11-26 18:39:52.997909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:59.764 [2024-11-26 18:39:52.997943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.474 ms 00:35:59.764 [2024-11-26 18:39:52.997977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.764 [2024-11-26 18:39:52.998035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.764 [2024-11-26 18:39:52.998062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:35:59.764 [2024-11-26 18:39:52.998085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:35:59.764 [2024-11-26 18:39:52.998129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.764 [2024-11-26 18:39:52.998193] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:35:59.764 [2024-11-26 18:39:52.998261] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:35:59.764 [2024-11-26 18:39:52.998349] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:35:59.764 [2024-11-26 18:39:52.998407] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:35:59.764 [2024-11-26 18:39:52.998539] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:35:59.764 [2024-11-26 18:39:52.998603] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:35:59.764 [2024-11-26 18:39:52.998681] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:35:59.764 [2024-11-26 18:39:52.998735] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:35:59.764 [2024-11-26 18:39:52.998799] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:35:59.764 [2024-11-26 18:39:52.998844] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:35:59.764 [2024-11-26 18:39:52.998878] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:35:59.764 [2024-11-26 18:39:52.998912] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:35:59.764 [2024-11-26 18:39:52.998944] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:35:59.764 [2024-11-26 18:39:52.998961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.764 [2024-11-26 18:39:52.998972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:35:59.764 [2024-11-26 18:39:52.998982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.773 ms 00:35:59.764 [2024-11-26 18:39:52.998992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.764 [2024-11-26 18:39:52.999091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.764 [2024-11-26 18:39:52.999103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:35:59.764 [2024-11-26 18:39:52.999114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:35:59.764 [2024-11-26 18:39:52.999123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.764 [2024-11-26 18:39:52.999230] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:35:59.764 [2024-11-26 18:39:52.999247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:35:59.764 [2024-11-26 18:39:52.999258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:59.764 [2024-11-26 18:39:52.999268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:59.764 [2024-11-26 18:39:52.999277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:35:59.764 [2024-11-26 18:39:52.999286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:35:59.764 [2024-11-26 18:39:52.999296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:35:59.764 [2024-11-26 18:39:52.999305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:35:59.764 [2024-11-26 18:39:52.999313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:35:59.764 [2024-11-26 18:39:52.999322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:59.764 [2024-11-26 18:39:52.999331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:35:59.764 [2024-11-26 18:39:52.999339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:35:59.764 [2024-11-26 18:39:52.999348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:59.764 [2024-11-26 18:39:52.999356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:35:59.764 [2024-11-26 18:39:52.999364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:35:59.764 [2024-11-26 18:39:52.999373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:59.764 [2024-11-26 18:39:52.999382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:35:59.764 [2024-11-26 18:39:52.999390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:35:59.764 [2024-11-26 18:39:52.999399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:59.764 [2024-11-26 18:39:52.999407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:35:59.764 [2024-11-26 18:39:52.999415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:35:59.764 [2024-11-26 18:39:52.999439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:59.764 [2024-11-26 18:39:52.999448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:35:59.764 [2024-11-26 18:39:52.999456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:35:59.764 [2024-11-26 18:39:52.999465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:59.764 [2024-11-26 18:39:52.999473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:35:59.764 [2024-11-26 18:39:52.999481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:35:59.764 [2024-11-26 18:39:52.999489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:59.764 [2024-11-26 18:39:52.999497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:35:59.764 [2024-11-26 18:39:52.999507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:35:59.764 [2024-11-26 18:39:52.999514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:59.764 [2024-11-26 18:39:52.999523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:35:59.764 [2024-11-26 18:39:52.999532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:35:59.764 [2024-11-26 18:39:52.999540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:59.764 [2024-11-26 18:39:52.999547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:35:59.764 [2024-11-26 18:39:52.999555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:35:59.764 [2024-11-26 18:39:52.999563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:59.764 [2024-11-26 18:39:52.999572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:35:59.764 [2024-11-26 18:39:52.999580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:35:59.764 [2024-11-26 18:39:52.999588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:59.764 [2024-11-26 18:39:52.999596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:35:59.764 [2024-11-26 18:39:52.999604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:35:59.764 [2024-11-26 18:39:52.999611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:59.764 [2024-11-26 18:39:52.999633] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:35:59.764 [2024-11-26 18:39:52.999643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:35:59.765 [2024-11-26 18:39:52.999652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:59.765 [2024-11-26 18:39:52.999661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:35:59.765 [2024-11-26 18:39:52.999670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:35:59.765 [2024-11-26 18:39:52.999680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:35:59.765 [2024-11-26 18:39:52.999688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:35:59.765 [2024-11-26 18:39:52.999697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:35:59.765 [2024-11-26 18:39:52.999705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:35:59.765 [2024-11-26 18:39:52.999713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:35:59.765 [2024-11-26 18:39:52.999724] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:35:59.765 [2024-11-26 18:39:52.999736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:59.765 [2024-11-26 18:39:52.999747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:35:59.765 [2024-11-26 18:39:52.999756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:35:59.765 [2024-11-26 18:39:52.999765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:35:59.765 [2024-11-26 18:39:52.999774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:35:59.765 [2024-11-26 18:39:52.999783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:35:59.765 [2024-11-26 18:39:52.999792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:35:59.765 [2024-11-26 18:39:52.999800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:35:59.765 [2024-11-26 18:39:52.999942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:35:59.765 [2024-11-26 18:39:52.999951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:35:59.765 [2024-11-26 18:39:52.999960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:35:59.765 [2024-11-26 18:39:52.999968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:35:59.765 [2024-11-26 18:39:52.999977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:35:59.765 [2024-11-26 18:39:52.999985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:35:59.765 [2024-11-26 18:39:52.999994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:35:59.765 [2024-11-26 18:39:53.000003] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:35:59.765 [2024-11-26 18:39:53.000013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:59.765 [2024-11-26 18:39:53.000027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:59.765 [2024-11-26 18:39:53.000036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:35:59.765 [2024-11-26 18:39:53.000045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:35:59.765 [2024-11-26 18:39:53.000055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:35:59.765 [2024-11-26 18:39:53.000066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.765 [2024-11-26 18:39:53.000076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:35:59.765 [2024-11-26 18:39:53.000085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.905 ms 00:35:59.765 [2024-11-26 18:39:53.000095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.765 [2024-11-26 18:39:53.039233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.765 [2024-11-26 18:39:53.039382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:59.765 [2024-11-26 18:39:53.039420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.141 ms 00:35:59.765 [2024-11-26 18:39:53.039444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.765 [2024-11-26 18:39:53.039526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.765 [2024-11-26 18:39:53.039553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:35:59.765 [2024-11-26 18:39:53.039577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:35:59.765 [2024-11-26 18:39:53.039600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.765 [2024-11-26 18:39:53.089858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.765 [2024-11-26 18:39:53.089987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:59.765 [2024-11-26 18:39:53.090019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.208 ms 00:35:59.765 [2024-11-26 18:39:53.090041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.765 [2024-11-26 18:39:53.090119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.765 [2024-11-26 18:39:53.090142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:59.765 [2024-11-26 18:39:53.090168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:59.765 [2024-11-26 18:39:53.090188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:59.765 [2024-11-26 18:39:53.090323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.765 [2024-11-26 18:39:53.090369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:59.765 [2024-11-26 18:39:53.090398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:35:59.765 [2024-11-26 18:39:53.090419] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:35:59.765 [2024-11-26 18:39:53.090490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:59.765 [2024-11-26 18:39:53.090522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:59.765 [2024-11-26 18:39:53.090549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:35:59.765 [2024-11-26 18:39:53.090585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.053 [2024-11-26 18:39:53.112315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.053 [2024-11-26 18:39:53.112440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:00.053 [2024-11-26 18:39:53.112472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.718 ms 00:36:00.053 [2024-11-26 18:39:53.112498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.053 [2024-11-26 18:39:53.112728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.053 [2024-11-26 18:39:53.112788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:36:00.053 [2024-11-26 18:39:53.112820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:36:00.053 [2024-11-26 18:39:53.112850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.053 [2024-11-26 18:39:53.156050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.053 [2024-11-26 18:39:53.156191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:36:00.053 [2024-11-26 18:39:53.156252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.213 ms 00:36:00.053 [2024-11-26 18:39:53.156278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.053 [2024-11-26 18:39:53.174948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.053 [2024-11-26 18:39:53.175091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:36:00.053 [2024-11-26 18:39:53.175128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.853 ms 00:36:00.053 [2024-11-26 18:39:53.175150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.053 [2024-11-26 18:39:53.276064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.053 [2024-11-26 18:39:53.276229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:36:00.053 [2024-11-26 18:39:53.276271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 100.951 ms 00:36:00.053 [2024-11-26 18:39:53.276296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.053 [2024-11-26 18:39:53.276571] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:36:00.053 [2024-11-26 18:39:53.276787] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:36:00.053 [2024-11-26 18:39:53.276970] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:36:00.053 [2024-11-26 18:39:53.277141] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:36:00.053 [2024-11-26 18:39:53.277188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.053 [2024-11-26 18:39:53.277219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:36:00.053 
[2024-11-26 18:39:53.277254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.776 ms 00:36:00.053 [2024-11-26 18:39:53.277282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.053 [2024-11-26 18:39:53.277422] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:36:00.053 [2024-11-26 18:39:53.277478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.053 [2024-11-26 18:39:53.277522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:36:00.053 [2024-11-26 18:39:53.277558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:36:00.053 [2024-11-26 18:39:53.277587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.053 [2024-11-26 18:39:53.305736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.053 [2024-11-26 18:39:53.305892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:36:00.053 [2024-11-26 18:39:53.305932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.108 ms 00:36:00.053 [2024-11-26 18:39:53.305965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.053 [2024-11-26 18:39:53.323515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.053 [2024-11-26 18:39:53.323665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:36:00.053 [2024-11-26 18:39:53.323684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:36:00.053 [2024-11-26 18:39:53.323693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.053 [2024-11-26 18:39:53.323829] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:36:00.053 [2024-11-26 18:39:53.324043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.053 [2024-11-26 18:39:53.324056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:36:00.053 [2024-11-26 18:39:53.324066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.217 ms 00:36:00.053 [2024-11-26 18:39:53.324076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.621 [2024-11-26 18:39:53.776581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.621 [2024-11-26 18:39:53.776662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:36:00.621 [2024-11-26 18:39:53.776679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 451.725 ms 00:36:00.621 [2024-11-26 18:39:53.776692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.621 [2024-11-26 18:39:53.783007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.622 [2024-11-26 18:39:53.783104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:36:00.622 [2024-11-26 18:39:53.783119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.231 ms 00:36:00.622 [2024-11-26 18:39:53.783134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.622 [2024-11-26 18:39:53.783529] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:36:00.622 [2024-11-26 18:39:53.783552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.622 [2024-11-26 18:39:53.783562] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:36:00.622 [2024-11-26 18:39:53.783572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.387 ms 00:36:00.622 [2024-11-26 18:39:53.783581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.622 [2024-11-26 18:39:53.783644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.622 [2024-11-26 18:39:53.783657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:36:00.622 [2024-11-26 18:39:53.783666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:36:00.622 [2024-11-26 18:39:53.783679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:00.622 [2024-11-26 18:39:53.783712] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 460.780 ms, result 0 00:36:00.622 [2024-11-26 18:39:53.783753] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:36:00.622 [2024-11-26 18:39:53.783833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:00.622 [2024-11-26 18:39:53.783849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:36:00.622 [2024-11-26 18:39:53.783858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.081 ms 00:36:00.622 [2024-11-26 18:39:53.783865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.190 [2024-11-26 18:39:54.290708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:01.190 [2024-11-26 18:39:54.290874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:36:01.190 [2024-11-26 18:39:54.290920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 506.636 ms 00:36:01.190 [2024-11-26 18:39:54.290941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.190 [2024-11-26 18:39:54.297409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:01.190 [2024-11-26 18:39:54.297465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:36:01.190 [2024-11-26 18:39:54.297479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.133 ms 00:36:01.190 [2024-11-26 18:39:54.297489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.190 [2024-11-26 18:39:54.297893] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:36:01.190 [2024-11-26 18:39:54.297923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:01.190 [2024-11-26 18:39:54.297954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:36:01.190 [2024-11-26 18:39:54.297965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.404 ms 00:36:01.190 [2024-11-26 18:39:54.297975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.190 [2024-11-26 18:39:54.298008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:01.190 [2024-11-26 18:39:54.298021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:36:01.190 [2024-11-26 18:39:54.298031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:01.190 [2024-11-26 18:39:54.298052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.191 
[2024-11-26 18:39:54.298095] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 515.326 ms, result 0 00:36:01.191 [2024-11-26 18:39:54.298143] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:36:01.191 [2024-11-26 18:39:54.298155] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:36:01.191 [2024-11-26 18:39:54.298167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:01.191 [2024-11-26 18:39:54.298176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:36:01.191 [2024-11-26 18:39:54.298185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 976.247 ms 00:36:01.191 [2024-11-26 18:39:54.298194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.191 [2024-11-26 18:39:54.298227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:01.191 [2024-11-26 18:39:54.298241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:36:01.191 [2024-11-26 18:39:54.298252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:36:01.191 [2024-11-26 18:39:54.298260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.191 [2024-11-26 18:39:54.310082] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:36:01.191 [2024-11-26 18:39:54.310217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:01.191 [2024-11-26 18:39:54.310230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:36:01.191 [2024-11-26 18:39:54.310240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.959 ms 00:36:01.191 [2024-11-26 18:39:54.310248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.191 [2024-11-26 18:39:54.310869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:01.191 [2024-11-26 18:39:54.310897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:36:01.191 [2024-11-26 18:39:54.310907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.547 ms 00:36:01.191 [2024-11-26 18:39:54.310914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.191 [2024-11-26 18:39:54.312974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:01.191 [2024-11-26 18:39:54.313067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:36:01.191 [2024-11-26 18:39:54.313082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.045 ms 00:36:01.191 [2024-11-26 18:39:54.313091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.191 [2024-11-26 18:39:54.313161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:01.191 [2024-11-26 18:39:54.313173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:36:01.191 [2024-11-26 18:39:54.313188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:36:01.191 [2024-11-26 18:39:54.313196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.191 [2024-11-26 18:39:54.313309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:01.191 [2024-11-26 18:39:54.313321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 
00:36:01.191 [2024-11-26 18:39:54.313331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:36:01.191 [2024-11-26 18:39:54.313340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.191 [2024-11-26 18:39:54.313363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:01.191 [2024-11-26 18:39:54.313373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:36:01.191 [2024-11-26 18:39:54.313382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:01.191 [2024-11-26 18:39:54.313391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.191 [2024-11-26 18:39:54.313425] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:36:01.191 [2024-11-26 18:39:54.313436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:01.191 [2024-11-26 18:39:54.313445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:36:01.191 [2024-11-26 18:39:54.313454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:36:01.191 [2024-11-26 18:39:54.313463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.191 [2024-11-26 18:39:54.313518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:01.191 [2024-11-26 18:39:54.313528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:36:01.191 [2024-11-26 18:39:54.313537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:36:01.191 [2024-11-26 18:39:54.313547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:01.191 [2024-11-26 18:39:54.314584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1372.121 ms, result 0 00:36:01.191 [2024-11-26 18:39:54.326936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:01.191 [2024-11-26 18:39:54.342915] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:36:01.191 [2024-11-26 18:39:54.352174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:01.191 18:39:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:01.191 18:39:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:36:01.191 18:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:01.191 18:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:36:01.191 18:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:36:01.191 18:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:36:01.191 18:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:36:01.191 18:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:01.191 18:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:36:01.191 Validate MD5 checksum, iteration 1 00:36:01.191 18:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:01.191 18:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:01.191 18:39:54 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:01.191 18:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:01.191 18:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:01.191 18:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:01.191 [2024-11-26 18:39:54.473154] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization... 00:36:01.191 [2024-11-26 18:39:54.473370] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84910 ] 00:36:01.450 [2024-11-26 18:39:54.646291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.710 [2024-11-26 18:39:54.805460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.627  [2024-11-26T18:39:57.529Z] Copying: 547/1024 [MB] (547 MBps) [2024-11-26T18:39:59.436Z] Copying: 1024/1024 [MB] (average 530 MBps) 00:36:06.101 00:36:06.101 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:36:06.101 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:08.633 18:40:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:08.633 18:40:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cdfc177c9fff589dbf92f8bb77220a8d 00:36:08.633 Validate MD5 checksum, iteration 2 00:36:08.633 18:40:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cdfc177c9fff589dbf92f8bb77220a8d != \c\d\f\c\1\7\7\c\9\f\f\f\5\8\9\d\b\f\9\2\f\8\b\b\7\7\2\2\0\a\8\d ]] 00:36:08.633 18:40:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:08.633 18:40:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:08.633 18:40:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:36:08.633 18:40:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:08.633 18:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:08.633 18:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:08.633 18:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:08.633 18:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:08.633 18:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:08.633 [2024-11-26 18:40:01.518719] Starting SPDK v25.01-pre git sha1 
e93f0f941 / DPDK 24.03.0 initialization... 00:36:08.633 [2024-11-26 18:40:01.518967] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84987 ] 00:36:08.633 [2024-11-26 18:40:01.697565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.633 [2024-11-26 18:40:01.851034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:10.536  [2024-11-26T18:40:04.855Z] Copying: 537/1024 [MB] (537 MBps) [2024-11-26T18:40:06.229Z] Copying: 1024/1024 [MB] (average 541 MBps) 00:36:12.894 00:36:12.894 18:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:36:12.894 18:40:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:14.795 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:14.795 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3752224598bac0e2df095d5a49ad2cc3 00:36:14.795 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3752224598bac0e2df095d5a49ad2cc3 != \3\7\5\2\2\2\4\5\9\8\b\a\c\0\e\2\d\f\0\9\5\d\5\a\4\9\a\d\2\c\c\3 ]] 00:36:14.795 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:14.795 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:14.795 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:36:14.795 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:36:14.795 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:36:14.795 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84875 ]] 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84875 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84875 ']' 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84875 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84875 00:36:15.054 killing process with pid 84875 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84875' 00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84875
00:36:15.054 18:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84875
00:36:16.439 [2024-11-26 18:40:09.578069] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:36:16.439 [2024-11-26 18:40:09.598082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.598158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:36:16.439 [2024-11-26 18:40:09.598174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms
00:36:16.439 [2024-11-26 18:40:09.598185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.439 [2024-11-26 18:40:09.598214] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:36:16.439 [2024-11-26 18:40:09.602622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.602682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:36:16.439 [2024-11-26 18:40:09.602711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.394 ms
00:36:16.439 [2024-11-26 18:40:09.602723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.439 [2024-11-26 18:40:09.602976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.602994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:36:16.439 [2024-11-26 18:40:09.603007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.217 ms
00:36:16.439 [2024-11-26 18:40:09.603016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.439 [2024-11-26 18:40:09.604249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.604293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
00:36:16.439 [2024-11-26 18:40:09.604308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.212 ms
00:36:16.439 [2024-11-26 18:40:09.604329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.439 [2024-11-26 18:40:09.605466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.605557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims
00:36:16.439 [2024-11-26 18:40:09.605573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.098 ms
00:36:16.439 [2024-11-26 18:40:09.605584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.439 [2024-11-26 18:40:09.623381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.623566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
00:36:16.439 [2024-11-26 18:40:09.623669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.733 ms
00:36:16.439 [2024-11-26 18:40:09.623711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.439 [2024-11-26 18:40:09.633009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.633192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
00:36:16.439 [2024-11-26 18:40:09.633246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.212 ms
00:36:16.439 [2024-11-26 18:40:09.633306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.439 [2024-11-26 18:40:09.633486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.633561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
00:36:16.439 [2024-11-26 18:40:09.633606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms
00:36:16.439 [2024-11-26 18:40:09.633691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.439 [2024-11-26 18:40:09.652144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.652327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata
00:36:16.439 [2024-11-26 18:40:09.652379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.420 ms
00:36:16.439 [2024-11-26 18:40:09.652412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.439 [2024-11-26 18:40:09.670272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.670449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata
00:36:16.439 [2024-11-26 18:40:09.670509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.786 ms
00:36:16.439 [2024-11-26 18:40:09.670539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.439 [2024-11-26 18:40:09.688393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.688569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
00:36:16.439 [2024-11-26 18:40:09.688633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.776 ms
00:36:16.439 [2024-11-26 18:40:09.688664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.439 [2024-11-26 18:40:09.706611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.706810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state
00:36:16.439 [2024-11-26 18:40:09.706867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.804 ms
00:36:16.439 [2024-11-26 18:40:09.706898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.439 [2024-11-26 18:40:09.707008] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:36:16.439 [2024-11-26 18:40:09.707066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:36:16.439 [2024-11-26 18:40:09.707142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed
00:36:16.439 [2024-11-26 18:40:09.707211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed
00:36:16.439 [2024-11-26 18:40:09.707321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.707395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.707455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.707534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.707629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.707701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.707807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.707882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.707979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.708062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.708141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.708220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.708297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.708372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.708451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:36:16.439 [2024-11-26 18:40:09.708532] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]
00:36:16.439 [2024-11-26 18:40:09.708589] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 3134470a-7984-414f-a04d-a7ad07872006
00:36:16.439 [2024-11-26 18:40:09.708688] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288
00:36:16.439 [2024-11-26 18:40:09.708737] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320
00:36:16.439 [2024-11-26 18:40:09.708778] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0
00:36:16.439 [2024-11-26 18:40:09.708826] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf
00:36:16.439 [2024-11-26 18:40:09.708882] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:36:16.439 [2024-11-26 18:40:09.708929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0
00:36:16.439 [2024-11-26 18:40:09.708986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0
00:36:16.439 [2024-11-26 18:40:09.709027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0
00:36:16.439 [2024-11-26 18:40:09.709073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0
00:36:16.439 [2024-11-26 18:40:09.709120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.709163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics
00:36:16.439 [2024-11-26 18:40:09.709216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.119 ms
00:36:16.439 [2024-11-26 18:40:09.709265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.439 [2024-11-26 18:40:09.733884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.734069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P
00:36:16.439 [2024-11-26 18:40:09.734117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.558 ms
00:36:16.439 [2024-11-26 18:40:09.734147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
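The statistics dump above reports WAF: inf. Assuming ftl_debug.c uses the conventional write-amplification definition (consistent with, but not confirmed by, this log), the figure follows directly from the counters printed just before it:

    WAF = total writes / user writes = 320 / 0  ->  undefined, printed as "inf"

No user (host) writes were issued during this shutdown-only sequence, so the ratio has a zero denominator and the debug dump falls back to inf.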
00:36:16.439 [2024-11-26 18:40:09.734900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:36:16.439 [2024-11-26 18:40:09.734966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:36:16.439 [2024-11-26 18:40:09.735010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.601 ms
00:36:16.439 [2024-11-26 18:40:09.735055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.699 [2024-11-26 18:40:09.813317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:16.699 [2024-11-26 18:40:09.813396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:36:16.699 [2024-11-26 18:40:09.813413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:16.699 [2024-11-26 18:40:09.813434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.699 [2024-11-26 18:40:09.813498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:16.699 [2024-11-26 18:40:09.813510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:36:16.699 [2024-11-26 18:40:09.813522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:16.699 [2024-11-26 18:40:09.813533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.699 [2024-11-26 18:40:09.813713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:16.699 [2024-11-26 18:40:09.813731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:36:16.699 [2024-11-26 18:40:09.813743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:16.699 [2024-11-26 18:40:09.813754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.699 [2024-11-26 18:40:09.813787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:16.699 [2024-11-26 18:40:09.813800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:36:16.699 [2024-11-26 18:40:09.813812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:16.699 [2024-11-26 18:40:09.813823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.699 [2024-11-26 18:40:09.953454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:16.699 [2024-11-26 18:40:09.953569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:36:16.699 [2024-11-26 18:40:09.953588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:16.699 [2024-11-26 18:40:09.953599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.958 [2024-11-26 18:40:10.073165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:16.958 [2024-11-26 18:40:10.073246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:36:16.958 [2024-11-26 18:40:10.073263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:16.958 [2024-11-26 18:40:10.073274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.958 [2024-11-26 18:40:10.073404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:16.958 [2024-11-26 18:40:10.073418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:36:16.958 [2024-11-26 18:40:10.073429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:16.958 [2024-11-26 18:40:10.073440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.958 [2024-11-26 18:40:10.073498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:16.958 [2024-11-26 18:40:10.073543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:36:16.959 [2024-11-26 18:40:10.073556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:16.959 [2024-11-26 18:40:10.073566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.959 [2024-11-26 18:40:10.073750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:16.959 [2024-11-26 18:40:10.073766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:36:16.959 [2024-11-26 18:40:10.073778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:16.959 [2024-11-26 18:40:10.073789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.959 [2024-11-26 18:40:10.073837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:16.959 [2024-11-26 18:40:10.073850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:36:16.959 [2024-11-26 18:40:10.073866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:16.959 [2024-11-26 18:40:10.073876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.959 [2024-11-26 18:40:10.073921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:16.959 [2024-11-26 18:40:10.073933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:36:16.959 [2024-11-26 18:40:10.073943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:16.959 [2024-11-26 18:40:10.073954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.959 [2024-11-26 18:40:10.074006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:36:16.959 [2024-11-26 18:40:10.074031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:36:16.959 [2024-11-26 18:40:10.074042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:36:16.959 [2024-11-26 18:40:10.074052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:36:16.959 [2024-11-26 18:40:10.074192] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 476.993 ms, result 0
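The trace_step output above prints each shutdown step as a name/duration/status triple. A small helper of this shape (hypothetical, not part of the SPDK test suite) can rank the steps by cost when digging through such a log, assuming the "name:" / "duration: ... ms" line pairing shown above:

    # Hypothetical log-analysis helper; summarize_ftl_steps <logfile> prints
    # "<duration> ms  <step name>" for every trace_step pair, largest first.
    summarize_ftl_steps() {
      awk '/428:trace_step/ { split($0, a, "name: "); name = a[2] }
           /430:trace_step/ { split($0, d, "duration: "); split(d[2], m, " ");
                              printf "%10s ms  %s\n", m[1], name }' "$1" | sort -rn
    }

Run against this log it would put Deinitialize L2P (24.558 ms) at the top, matching the durations visible above.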
00:36:18.335 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:36:18.335 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:36:18.335 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:36:18.335 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:36:18.335 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:36:18.335 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:36:18.335 Remove shared memory files 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:36:18.335 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:36:18.335 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:36:18.335 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:36:18.335 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84609
00:36:18.335 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:36:18.335 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:36:18.335 ************************************
00:36:18.335 END TEST ftl_upgrade_shutdown
00:36:18.335 ************************************
00:36:18.335
00:36:18.335 real 1m43.121s
00:36:18.335 user 2m25.481s
00:36:18.335 sys 0m24.075s
00:36:18.335 18:40:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:18.335 18:40:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:36:18.335 18:40:11 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:36:18.335 18:40:11 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:36:18.335 Process with pid 77475 is not found 18:40:11 ftl -- ftl/ftl.sh@14 -- # killprocess 77475
00:36:18.335 18:40:11 ftl -- common/autotest_common.sh@954 -- # '[' -z 77475 ']'
00:36:18.335 18:40:11 ftl -- common/autotest_common.sh@958 -- # kill -0 77475
00:36:18.335 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77475) - No such process
00:36:18.335 18:40:11 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77475 is not found'
00:36:18.335 18:40:11 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:36:18.335 18:40:11 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85122
00:36:18.335 18:40:11 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85122
00:36:18.335 18:40:11 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:36:18.335 18:40:11 ftl -- common/autotest_common.sh@835 -- # '[' -z 85122 ']'
00:36:18.335 18:40:11 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:18.335 18:40:11 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:18.335 18:40:11 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:18.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:18.335 18:40:11 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:18.335 18:40:11 ftl -- common/autotest_common.sh@10 -- # set +x
00:36:18.594 [2024-11-26 18:40:11.718048] Starting SPDK v25.01-pre git sha1 e93f0f941 / DPDK 24.03.0 initialization...
00:36:18.594 [2024-11-26 18:40:11.718259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85122 ]
00:36:18.594 [2024-11-26 18:40:11.880127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:18.853 [2024-11-26 18:40:12.006447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:19.788 18:40:12 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:19.788 18:40:12 ftl -- common/autotest_common.sh@868 -- # return 0
00:36:19.789 18:40:12 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:36:20.047 nvme0n1
00:36:20.047 18:40:13 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:36:20.047 18:40:13 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:36:20.047 18:40:13 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:36:20.305 18:40:13 ftl -- ftl/common.sh@28 -- # stores=eb2041e0-1743-4172-9374-29f1d1b3676e
00:36:20.305 18:40:13 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:36:20.305 18:40:13 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eb2041e0-1743-4172-9374-29f1d1b3676e
00:36:20.563 18:40:13 ftl -- ftl/ftl.sh@23 -- # killprocess 85122
00:36:20.563 18:40:13 ftl -- common/autotest_common.sh@954 -- # '[' -z 85122 ']'
00:36:20.563 18:40:13 ftl -- common/autotest_common.sh@958 -- # kill -0 85122
00:36:20.563 18:40:13 ftl -- common/autotest_common.sh@959 -- # uname
00:36:20.563 18:40:13 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:20.563 18:40:13 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85122
00:36:20.564 killing process with pid 85122 18:40:13 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:20.564 18:40:13 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:20.564 18:40:13 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85122'
00:36:20.564 18:40:13 ftl -- common/autotest_common.sh@973 -- # kill 85122
00:36:20.564 18:40:13 ftl -- common/autotest_common.sh@978 -- # wait 85122
00:36:23.850 18:40:16 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:36:23.850 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:23.850 Waiting for block devices as requested
00:36:23.850 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:36:23.850 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:36:23.850 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:36:24.109 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:36:29.475 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:36:29.475 Remove shared memory files 18:40:22 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:36:29.475 18:40:22 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:36:29.475 18:40:22 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:36:29.475 18:40:22 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:36:29.475 18:40:22 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:36:29.475 18:40:22 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:36:29.475 18:40:22 ftl -- ftl/common.sh@209 -- # rm -f rm -f
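The clear_lvols xtrace above pairs the bdev_lvol_get_lvstores and bdev_lvol_delete_lvstore RPCs. A minimal sketch of such a helper, assembled from the commands visible in the trace (the real implementation lives in the test suite's ftl/common.sh; the function body here is reconstructed, not copied):

    # Sketch of a clear_lvols-style helper; rpc path taken from the log above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    clear_lvols() {
      # ask the running target for every lvstore UUID, then delete each one
      stores=$($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
      for lvs in $stores; do
        $rpc bdev_lvol_delete_lvstore -u "$lvs"
      done
    }

In the run above it found a single store (eb2041e0-1743-4172-9374-29f1d1b3676e) and removed it before the target was killed.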
00:36:29.475 ************************************
00:36:29.475 END TEST ftl
00:36:29.475 ************************************
00:36:29.475
00:36:29.475 real 11m16.853s
00:36:29.475 user 14m18.124s
00:36:29.475 sys 1m21.781s
00:36:29.475 18:40:22 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:29.475 18:40:22 ftl -- common/autotest_common.sh@10 -- # set +x
00:36:29.475 18:40:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:36:29.475 18:40:22 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:36:29.475 18:40:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:36:29.475 18:40:22 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:36:29.475 18:40:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:36:29.475 18:40:22 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:36:29.475 18:40:22 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:36:29.475 18:40:22 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:36:29.475 18:40:22 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:36:29.475 18:40:22 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:36:29.475 18:40:22 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:29.475 18:40:22 -- common/autotest_common.sh@10 -- # set +x
00:36:29.475 18:40:22 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:36:29.475 18:40:22 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:36:29.475 18:40:22 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:36:29.475 18:40:22 -- common/autotest_common.sh@10 -- # set +x
00:36:31.379 INFO: APP EXITING
00:36:31.379 INFO: killing all VMs
00:36:31.379 INFO: killing vhost app
00:36:31.379 INFO: EXIT DONE
00:36:31.637 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:31.896 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:36:31.896 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:36:32.154 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:36:32.154 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:36:32.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:32.982 Cleaning
00:36:32.982 Removing: /var/run/dpdk/spdk0/config
00:36:32.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:36:32.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:36:32.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:36:32.982 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:36:32.982 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:36:32.982 Removing: /var/run/dpdk/spdk0/hugepage_info
00:36:32.982 Removing: /var/run/dpdk/spdk0
00:36:32.982 Removing: /var/run/dpdk/spdk_pid57807
00:36:32.982 Removing: /var/run/dpdk/spdk_pid58059
00:36:32.982 Removing: /var/run/dpdk/spdk_pid58299
00:36:32.982 Removing: /var/run/dpdk/spdk_pid58403
00:36:32.982 Removing: /var/run/dpdk/spdk_pid58459
00:36:32.982 Removing: /var/run/dpdk/spdk_pid58598
00:36:32.982 Removing: /var/run/dpdk/spdk_pid58622
00:36:32.982 Removing: /var/run/dpdk/spdk_pid58837
00:36:32.982 Removing: /var/run/dpdk/spdk_pid58961
00:36:32.982 Removing: /var/run/dpdk/spdk_pid59072
00:36:32.982 Removing: /var/run/dpdk/spdk_pid59201
00:36:32.982 Removing: /var/run/dpdk/spdk_pid59320
00:36:32.982 Removing: /var/run/dpdk/spdk_pid59354
00:36:32.982 Removing: /var/run/dpdk/spdk_pid59396
00:36:32.982 Removing: /var/run/dpdk/spdk_pid59472
00:36:32.982 Removing: /var/run/dpdk/spdk_pid59589
00:36:32.982 Removing: /var/run/dpdk/spdk_pid60044
00:36:32.982 Removing: /var/run/dpdk/spdk_pid60133
00:36:32.982 Removing: /var/run/dpdk/spdk_pid60207
00:36:32.982 Removing: /var/run/dpdk/spdk_pid60229
00:36:32.982 Removing: /var/run/dpdk/spdk_pid60393
00:36:32.982 Removing: /var/run/dpdk/spdk_pid60409
00:36:32.982 Removing: /var/run/dpdk/spdk_pid60563
00:36:32.982 Removing: /var/run/dpdk/spdk_pid60579
00:36:32.982 Removing: /var/run/dpdk/spdk_pid60654
00:36:32.982 Removing: /var/run/dpdk/spdk_pid60678
00:36:32.982 Removing: /var/run/dpdk/spdk_pid60747
00:36:33.244 Removing: /var/run/dpdk/spdk_pid60767
00:36:33.244 Removing: /var/run/dpdk/spdk_pid60973
00:36:33.244 Removing: /var/run/dpdk/spdk_pid61010
00:36:33.244 Removing: /var/run/dpdk/spdk_pid61099
00:36:33.244 Removing: /var/run/dpdk/spdk_pid61293
00:36:33.244 Removing: /var/run/dpdk/spdk_pid61388
00:36:33.244 Removing: /var/run/dpdk/spdk_pid61430
00:36:33.244 Removing: /var/run/dpdk/spdk_pid61891
00:36:33.244 Removing: /var/run/dpdk/spdk_pid61999
00:36:33.244 Removing: /var/run/dpdk/spdk_pid62120
00:36:33.244 Removing: /var/run/dpdk/spdk_pid62179
00:36:33.244 Removing: /var/run/dpdk/spdk_pid62210
00:36:33.244 Removing: /var/run/dpdk/spdk_pid62294
00:36:33.244 Removing: /var/run/dpdk/spdk_pid62937
00:36:33.244 Removing: /var/run/dpdk/spdk_pid62984
00:36:33.244 Removing: /var/run/dpdk/spdk_pid63479
00:36:33.244 Removing: /var/run/dpdk/spdk_pid63588
00:36:33.244 Removing: /var/run/dpdk/spdk_pid63720
00:36:33.244 Removing: /var/run/dpdk/spdk_pid63773
00:36:33.244 Removing: /var/run/dpdk/spdk_pid63804
00:36:33.244 Removing: /var/run/dpdk/spdk_pid63835
00:36:33.244 Removing: /var/run/dpdk/spdk_pid65717
00:36:33.244 Removing: /var/run/dpdk/spdk_pid65866
00:36:33.244 Removing: /var/run/dpdk/spdk_pid65876
00:36:33.244 Removing: /var/run/dpdk/spdk_pid65893
00:36:33.244 Removing: /var/run/dpdk/spdk_pid65962
00:36:33.244 Removing: /var/run/dpdk/spdk_pid65966
00:36:33.244 Removing: /var/run/dpdk/spdk_pid65983
00:36:33.244 Removing: /var/run/dpdk/spdk_pid66051
00:36:33.244 Removing: /var/run/dpdk/spdk_pid66059
00:36:33.244 Removing: /var/run/dpdk/spdk_pid66071
00:36:33.244 Removing: /var/run/dpdk/spdk_pid66150
00:36:33.244 Removing: /var/run/dpdk/spdk_pid66159
00:36:33.244 Removing: /var/run/dpdk/spdk_pid66177
00:36:33.244 Removing: /var/run/dpdk/spdk_pid67658
00:36:33.244 Removing: /var/run/dpdk/spdk_pid67777
00:36:33.244 Removing: /var/run/dpdk/spdk_pid69197
00:36:33.244 Removing: /var/run/dpdk/spdk_pid70937
00:36:33.244 Removing: /var/run/dpdk/spdk_pid71022
00:36:33.244 Removing: /var/run/dpdk/spdk_pid71097
00:36:33.244 Removing: /var/run/dpdk/spdk_pid71211
00:36:33.244 Removing: /var/run/dpdk/spdk_pid71310
00:36:33.244 Removing: /var/run/dpdk/spdk_pid71411
00:36:33.244 Removing: /var/run/dpdk/spdk_pid71497
00:36:33.244 Removing: /var/run/dpdk/spdk_pid71578
00:36:33.244 Removing: /var/run/dpdk/spdk_pid71693
00:36:33.244 Removing: /var/run/dpdk/spdk_pid71791
00:36:33.244 Removing: /var/run/dpdk/spdk_pid71896
00:36:33.244 Removing: /var/run/dpdk/spdk_pid71985
00:36:33.244 Removing: /var/run/dpdk/spdk_pid72067
00:36:33.244 Removing: /var/run/dpdk/spdk_pid72182
00:36:33.244 Removing: /var/run/dpdk/spdk_pid72274
00:36:33.244 Removing: /var/run/dpdk/spdk_pid72381
00:36:33.244 Removing: /var/run/dpdk/spdk_pid72456
00:36:33.244 Removing: /var/run/dpdk/spdk_pid72542
00:36:33.244 Removing: /var/run/dpdk/spdk_pid72646
00:36:33.244 Removing: /var/run/dpdk/spdk_pid72745
00:36:33.244 Removing: /var/run/dpdk/spdk_pid72846
00:36:33.244 Removing: /var/run/dpdk/spdk_pid72929
00:36:33.244 Removing: /var/run/dpdk/spdk_pid73005
00:36:33.244 Removing: /var/run/dpdk/spdk_pid73085
00:36:33.244 Removing: /var/run/dpdk/spdk_pid73159
00:36:33.244 Removing: /var/run/dpdk/spdk_pid73268
00:36:33.524 Removing: /var/run/dpdk/spdk_pid73364
00:36:33.524 Removing: /var/run/dpdk/spdk_pid73465
00:36:33.524 Removing: /var/run/dpdk/spdk_pid73544
00:36:33.524 Removing: /var/run/dpdk/spdk_pid73624
00:36:33.524 Removing: /var/run/dpdk/spdk_pid73704
00:36:33.524 Removing: /var/run/dpdk/spdk_pid73774
00:36:33.524 Removing: /var/run/dpdk/spdk_pid73883
00:36:33.524 Removing: /var/run/dpdk/spdk_pid73979
00:36:33.524 Removing: /var/run/dpdk/spdk_pid74126
00:36:33.524 Removing: /var/run/dpdk/spdk_pid74422
00:36:33.524 Removing: /var/run/dpdk/spdk_pid74459
00:36:33.524 Removing: /var/run/dpdk/spdk_pid74913
00:36:33.524 Removing: /var/run/dpdk/spdk_pid75109
00:36:33.525 Removing: /var/run/dpdk/spdk_pid75202
00:36:33.525 Removing: /var/run/dpdk/spdk_pid75316
00:36:33.525 Removing: /var/run/dpdk/spdk_pid75379
00:36:33.525 Removing: /var/run/dpdk/spdk_pid75409
00:36:33.525 Removing: /var/run/dpdk/spdk_pid75902
00:36:33.525 Removing: /var/run/dpdk/spdk_pid75970
00:36:33.525 Removing: /var/run/dpdk/spdk_pid76080
00:36:33.525 Removing: /var/run/dpdk/spdk_pid76506
00:36:33.525 Removing: /var/run/dpdk/spdk_pid76651
00:36:33.525 Removing: /var/run/dpdk/spdk_pid77475
00:36:33.525 Removing: /var/run/dpdk/spdk_pid77624
00:36:33.525 Removing: /var/run/dpdk/spdk_pid77894
00:36:33.525 Removing: /var/run/dpdk/spdk_pid77997
00:36:33.525 Removing: /var/run/dpdk/spdk_pid78328
00:36:33.525 Removing: /var/run/dpdk/spdk_pid78598
00:36:33.525 Removing: /var/run/dpdk/spdk_pid78990
00:36:33.525 Removing: /var/run/dpdk/spdk_pid79244
00:36:33.525 Removing: /var/run/dpdk/spdk_pid79379
00:36:33.525 Removing: /var/run/dpdk/spdk_pid79444
00:36:33.525 Removing: /var/run/dpdk/spdk_pid79575
00:36:33.525 Removing: /var/run/dpdk/spdk_pid79607
00:36:33.525 Removing: /var/run/dpdk/spdk_pid79671
00:36:33.525 Removing: /var/run/dpdk/spdk_pid79872
00:36:33.525 Removing: /var/run/dpdk/spdk_pid80142
00:36:33.525 Removing: /var/run/dpdk/spdk_pid80573
00:36:33.525 Removing: /var/run/dpdk/spdk_pid80988
00:36:33.525 Removing: /var/run/dpdk/spdk_pid81421
00:36:33.525 Removing: /var/run/dpdk/spdk_pid81899
00:36:33.525 Removing: /var/run/dpdk/spdk_pid82051
00:36:33.525 Removing: /var/run/dpdk/spdk_pid82135
00:36:33.525 Removing: /var/run/dpdk/spdk_pid82682
00:36:33.525 Removing: /var/run/dpdk/spdk_pid82745
00:36:33.525 Removing: /var/run/dpdk/spdk_pid83162
00:36:33.525 Removing: /var/run/dpdk/spdk_pid83506
00:36:33.525 Removing: /var/run/dpdk/spdk_pid83943
00:36:33.525 Removing: /var/run/dpdk/spdk_pid84071
00:36:33.525 Removing: /var/run/dpdk/spdk_pid84128
00:36:33.525 Removing: /var/run/dpdk/spdk_pid84201
00:36:33.525 Removing: /var/run/dpdk/spdk_pid84264
00:36:33.525 Removing: /var/run/dpdk/spdk_pid84345
00:36:33.525 Removing: /var/run/dpdk/spdk_pid84609
00:36:33.525 Removing: /var/run/dpdk/spdk_pid84704
00:36:33.525 Removing: /var/run/dpdk/spdk_pid84780
00:36:33.525 Removing: /var/run/dpdk/spdk_pid84875
00:36:33.525 Removing: /var/run/dpdk/spdk_pid84910
00:36:33.525 Removing: /var/run/dpdk/spdk_pid84987
00:36:33.525 Removing: /var/run/dpdk/spdk_pid85122
00:36:33.525 Clean
00:36:33.784 18:40:26 -- common/autotest_common.sh@1453 -- # return 0
00:36:33.784 18:40:26 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:33.784 18:40:26 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:33.784 18:40:26 -- common/autotest_common.sh@10 -- # set +x
00:36:33.784 18:40:26 -- spdk/autotest.sh@391 -- timing_exit autotest
00:36:33.784 18:40:26 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:33.784 18:40:26 -- common/autotest_common.sh@10 -- # set +x
00:36:33.784 18:40:26 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:36:33.784 18:40:26 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:36:33.784 18:40:26 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:36:33.784 18:40:26 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:33.784 18:40:26 -- spdk/autotest.sh@398 -- # hostname
00:36:33.784 18:40:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:36:34.043 geninfo: WARNING: invalid characters removed from testname!
00:37:00.670 18:40:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:03.963 18:40:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:06.493 18:40:59 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:09.056 18:41:02 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:11.590 18:41:04 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:14.125 18:41:07 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
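The lcov invocations above follow a capture, merge, filter pattern: capture coverage from the test run, append it to the pre-test baseline, then strip DPDK, system, and tool code from the combined report. A condensed sketch of the same flow; LCOV_OPTS and out are assumed shorthands, not names from autotest.sh:

    # Condensed sketch of the coverage post-processing above (flag strings
    # copied from the log; the loop/variable structure is an assumption).
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    out=/home/vagrant/spdk_repo/spdk/../output

    lcov $LCOV_OPTS -q -c --no-external -d /home/vagrant/spdk_repo/spdk \
         -t "$(hostname)" -o "$out/cov_test.info"               # capture the test run
    lcov $LCOV_OPTS -q -a "$out/cov_base.info" \
         -a "$out/cov_test.info" -o "$out/cov_total.info"       # merge with the baseline
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do      # drop external/tool code
      lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done

Filtering in place with -r keeps cov_total.info as the single artifact that later reporting steps consume.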
00:37:16.711 18:41:09 -- spdk/autotest.sh@408 -- rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:16.711 18:41:09 -- spdk/autorun.sh@1 -- $ timing_finish
00:37:16.711 18:41:09 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:37:16.711 18:41:09 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:16.711 18:41:09 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:16.711 18:41:09 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:37:16.711 + [[ -n 5454 ]]
00:37:16.711 + sudo kill 5454
00:37:16.721 [Pipeline] }
00:37:16.741 [Pipeline] // timeout
00:37:16.748 [Pipeline] }
00:37:16.765 [Pipeline] // stage
00:37:16.770 [Pipeline] }
00:37:16.786 [Pipeline] // catchError
00:37:16.796 [Pipeline] stage
00:37:16.798 [Pipeline] { (Stop VM)
00:37:16.811 [Pipeline] sh
00:37:17.092 + vagrant halt
00:37:21.284 ==> default: Halting domain...
00:37:27.868 [Pipeline] sh
00:37:28.150 + vagrant destroy -f
00:37:31.442 ==> default: Removing domain...
00:37:31.713 [Pipeline] sh
00:37:32.001 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:37:32.011 [Pipeline] }
00:37:32.028 [Pipeline] // stage
00:37:32.033 [Pipeline] }
00:37:32.052 [Pipeline] // dir
00:37:32.059 [Pipeline] }
00:37:32.077 [Pipeline] // wrap
00:37:32.085 [Pipeline] }
00:37:32.098 [Pipeline] // catchError
00:37:32.109 [Pipeline] stage
00:37:32.112 [Pipeline] { (Epilogue)
00:37:32.127 [Pipeline] sh
00:37:32.413 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:40.595 [Pipeline] catchError
00:37:40.597 [Pipeline] {
00:37:40.611 [Pipeline] sh
00:37:40.891 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:40.891 Artifacts sizes are good
00:37:40.900 [Pipeline] }
00:37:40.915 [Pipeline] // catchError
00:37:40.928 [Pipeline] archiveArtifacts
00:37:40.937 Archiving artifacts
00:37:41.063 [Pipeline] cleanWs
00:37:41.074 [WS-CLEANUP] Deleting project workspace...
00:37:41.074 [WS-CLEANUP] Deferred wipeout is used...
00:37:41.080 [WS-CLEANUP] done
00:37:41.082 [Pipeline] }
00:37:41.097 [Pipeline] // stage
00:37:41.102 [Pipeline] }
00:37:41.116 [Pipeline] // node
00:37:41.121 [Pipeline] End of Pipeline
00:37:41.155 Finished: SUCCESS